The slides show AMD at 50% of Nvidia's power draw - and given Nvidia's power leakage of late (the GTX 780 Ti needed a nuclear reactor to power it, and let's not talk about overclocked GTX 980 Ti cards needing 3 x 8-pin power connectors to run)
54.1W lower than a GTX 950 - the complete system pulled 88W whilst gaming, and with the GTX 950 in the same system it was 150W
One question: the next generation of AMD cards will be the R9 490, but what about Nvidia? Will their next versions be the 1080 Ti or something similar?
Corky, I take slides with a massive pinch of salt; the Fury series was hot and bothered, slow and expensive in my eyes. But I admit I'm none too happy with the graphics card market as a whole, given the prices they charge now.
I was really disappointed with the Fury X, and AMD have been losing my faith year after year: I bought into the 290X only to get duff memory on the card, and the whole series was riddled with the problem when it first launched.
Anyway, until I see full reviews I will withhold judgement.
They might go down the route AMD did when they hit the 9000 naming scheme: the next top-end Nvidia GPU will be the GTX X80 Ti.
Or maybe they'll go back to having the G[T[X]] at the end, so we'll have the 1080 GTX. Then they can start messing with GTS and GTO suffixes again to muddy the waters.
Typically, when NV and AMD reach the point of needing a 10 in the name they figure out a way around it. Shame, as a 1080 Ti sounds great.
9800GTX -> GTX 280 as an example.
9800XT -> X800XT as a lesser example (as it technically is 10800XT).
WRT the references to hardware schedulers, I've not yet double-checked but I thought that was a feature of GCN since version 1?
Anyway, it's interesting they chose to show whole-system power consumption: since the system has a base load, the difference would look a lot bigger if the card's power draw were shown in isolation. However, I understand they probably don't want to give too much away about the card.
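As a rough sketch of that point (the 50W base load below is a made-up figure for illustration, not anything from AMD's slides; the 88W and 150W system numbers are from the post above):

```python
# Rough illustration: why whole-system figures understate the card-level gap.
base_load = 50          # assumed idle/base system draw in watts (hypothetical)
polaris_system = 88     # reported whole-system draw with the Polaris card
gtx950_system = 150     # reported whole-system draw with the GTX 950

polaris_card = polaris_system - base_load   # ~38W for the card alone
gtx950_card = gtx950_system - base_load     # ~100W for the card alone

print(f"System-level ratio: {gtx950_system / polaris_system:.2f}x")  # ~1.70x
print(f"Card-level ratio:   {gtx950_card / polaris_card:.2f}x")      # ~2.63x
```

So the same measurement looks like "about 1.7x better" at the wall but closer to "2.6x better" for the card itself, depending on what the base load actually is.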
Also, it's nothing remotely like a paper launch - they're not launching anything, but they're showing working silicon and its power consumption at the moment they actually announce the silicon. Nvidia talk about future GPUs in extremely nebulous bullet points years in advance. Considering this is likely not AMD's final silicon revision, and they're still at least a few months off release, it's IMO quite telling that they're far enough along with both hardware and software to confidently show a working demonstration.
Ah, but will it run Crysis?
Actually, I've been waiting to purchase a next-generation graphics card for a VR rig I'm building. But since TSMC has been in bed with Apple, manufacturing of new GPUs has stalled for both manufacturers.
I'm hoping for a leap in performance, but as with CPUs it seems sub-20nm designs only bring a leap in efficiency rather than peak performance.
There are rumours that AMD might be sourcing GPU production from both TSMC and GloFo/Samsung (probably for different chips rather than the costly exercise of dual-sourcing). I suppose the high volume of mobile processors on bleeding-edge nodes should at least help to improve yields and costs by the time GPUs enter mass production. Historically, GPUs were one of the forerunners with contract foundries and likely had to eat large costs and poorer yields because of it.
WRT performance - we haven't actually seen the performance of any sub-20nm GPUs yet. However, as GPUs have been pressing against the power wall for a while now, bringing down silicon power is hugely useful in improving performance. 14/16nm also offers significant density advantages over the previous GPU node, 28nm, so that's more room for improvement, as current GPUs (e.g. GM200/Fiji) are also up against the physical size limit. The new node also offers improved performance for a given power draw.
I loved AMD's shenanigans with that one, as it went 9800 -> X800 -> X1800 -> 2900
Brilliant piece of misdirection and sleight of hand to simply loop round the thousands again.
It was only when they got to the 7000 and OEM-only 8000 series cards, and people started pointing out that they were about to loop back round to the 9800 series again, that they changed the numbering system completely.
Of course, Nvidia haven't, to the best of my recall, had a 1000 series card yet (I don't think they started doing thousands until the FX 5000 series), so perhaps there's hope for the 1080 Ti yet.
AMD are dual-sourcing their GPUs, it seems - GF/Samsung probably for the lower end and TSMC for the higher end. At least from what we have seen with the Apple chips, the Samsung-made ones were smaller with slightly worse power consumption, while the TSMC ones were larger in size. Samsung has been producing 14nm chips for longer than TSMC has with its 16nm process, and I suspect that at least for smaller chips it might be more mature - remember, the Polaris GPU shown is not massively larger than the ~100mm2 A9.
With AMD probably using GF/Samsung for the mass-market chips, not only should they be able to get them out quicker than Nvidia, but it probably counts towards their WSA with GF too. Also, TSMC will probably be hammered for volume this year as well.