Read more.
Quote:
Claim "a historic leap in performance per watt for Radeon GPUs".
Slide embargo shown is 6pm today, so maybe some sites already have articles about this ready to go?
The deck says 4th generation, which most people would expect to call GCN 1.3 - so where does this GCN 4.0 come from?
Just seen a few of those slides in a video AMD just posted: https://www.youtube.com/watch?v=5g3eQejGJ_A&feature=youtu.be
Official AMD press release just arrived, please see updated article.
Before they launch, NVIDIA will have sold 50 million GTX 950s.
Looks like I'm going to build my first AMD machine.
AMD never used GCN 1.x names, and they're trying to get the media to stop using it as well.
Polaris is the code name for a set of modules, each at a specific capability. That includes what they are calling GCN 4 and what the media might have called GCN 1.3.
Hopefully AMD is getting things right now - teaser announcements with some meat in them are a good thing to see.
What I want to know is how they're going to handle two different fabrication processes. Aren't they splitting between TSMC, who are using 16nm, and GloFo, who use 14nm?
If so, does that mean there's going to be a split in the fab process used between high-, mid-, and low-end cards, or is it going to be a lucky dip?
According to the rumours, some card types will use a GPU made at TSMC, some at GloFo. We don't know yet which fab is getting the low-, mid- and high-end parts, so you will just have to wait.
My guess would be that low-end parts will go to GloFo, to help pad out AMD's wafer purchase agreement in the face of a lack of CPU sales. If the high end is HBM2 and TSMC is already handling Fury, then it makes sense to keep those high-end parts there for now.
Lucky dip wouldn't make sense, you need massive volumes to dual source a single part.
Yeah, I didn't think a lucky dip would make sense, but stranger things have happened. :)
It should be interesting to see where they draw the line. I wonder if they'll use one fab for HBM cards and the other for GDDR cards.
The power consumption comparison versus a 950 is apples to oranges.
The game is frame-capped, so getting 60fps on both does not mean the GPUs are equivalently powerful - a 980 Ti would have hit the same frame rate and consumed far more.
The 950 is tested at stock clocks versus a lower-clocked engineering sample from AMD. You could lower the 950's clocks until it just manages to hit the 60fps cap and reduce its power consumption.
Can't wait to see what the actual performance/watt is like when tested properly.
The AMD chip is running at 850MHz, which is not far off the 900MHz to 1GHz mark current AMD chips run at, and GCN chips are not known for very high clockspeeds. Maxwell, in comparison, appears to be tuned for much higher clockspeeds than GCN.
Plus the Nvidia chips are running at very high clockspeeds - my GTX 960 frequently hits around 1.3GHz. But if the chip is under less load, it will be running at lower clockspeeds anyway, reducing power consumption. The Boost on Maxwell V2 is application-dependent, IIRC.
Edit!!
Plus, we could say the same thing about the AMD chip - it could also be running at a lower clockspeed if it's not being taxed.
Why do people keep mentioning GTX950s? :/
Will keep an eye on these; AMD appear to be getting back on track with their GPUs/drivers.
AMD compared a 120mm2 Polaris prototype chip with GDDR5, running at 850MHz and 0.8V, to a GTX 950 in Star Wars: Battlefront and measured power consumption.
The total system with the Polaris chip used around 80W to 90W, and with the GTX 950, 140W to 150W; both were locked at 60fps to make sure they were doing the same amount of pixel pushing.
AMD - Next generation in paper launches...
Smells of a company desperately trying to keep their head above the waterline.
You mean like Nvidia releasing tidbits about Pascal a few months ago, and it's nowhere to be seen?
In fact it was March LAST YEAR:
http://hexus.net/tech/news/graphics/...en-pascal-gpu/
Another one 8 months later:
http://hexus.net/tech/news/graphics/...-japanese-gtc/
So nearly 9 months later, where is Pascal then, and where is your whining in the comments of those articles?
Using your extremely warped logic, that would make Nvidia two generations ahead of AMD in paper launches.
There wasn't even a demonstration of a chip running.
You sound like some person who is desperately trying to justify their own purchases in some incredibly childish way.
Plus, we all remember Rollo, so don't think you sound cool here.
This is quite interesting and is referred to in both the Anandtech article and the one from Hardware.fr:
Quote:
Originally Posted by Anandtech
So real-world tessellation performance should see a decent uplift too.
Quote:
Originally Posted by Hardware.fr
What're the odds of seeing any rebranded parts? 16nm for the whole range would be nice, but I'd be amazed if they did it. At the same time, with such a big claimed gulf in performance, any 28nm parts would stick out a bit.
The GTX 950 is 'affordable' - I'm sure very few people can afford the GTX 980 Ti or even the Fury X. What AMD have to do is launch a better low-end card this year.
So, meh to AMD right now. Wattage on the Fury cards was a joke, so they have a massive hill to climb in my book after that.
The Nvidia GP206 card shown by Nvidia today comes with GDDR5 - so no HBM for Nvidia this time around?
So the power draw comparison didn't convince you they've addressed that "problem" then?
It was after all the main reason they demonstrated the power draw (IMO).
My guess is that we'll still have cards with GDDR on them for years to come, HBM is going to be reserved for high end cards at first and slowly trickle down when there's a need for it.
I should probably read the article next time!
Have you seen the reports on how much difference DX12 is going to make? So far it looks like AMD is streets ahead once games start utilizing it. We may well see a significant switch in the performance crown, even with older-generation cards (as long as they're GCN-based), unless Nvidia releases something special (which they probably will, in fairness).
The slides show AMD at 50% of Nvidia's power - and given Nvidia's power leakage of late (the GTX 780 Ti needed a nuclear reactor to power it, and let's not talk about the overclocked GTX 980 Ti needing 3 x 8-pin power connectors to run)...
54.1W lower than a GTX 950 - the complete system pulled 88W whilst gaming; put the GTX 950 in the same system and it was 150W.
One question: the next generation of AMD cards will be the R9 490, but what about Nvidia? Will the next versions be the 1080 Ti or something similar?
Corky, I take slides with a massive pinch of salt; the Fury series was hot and bothered, slow and expensive in my eyes. But I admit I'm none too happy with the GFX card market as a whole, with the prices they charge now.
I was really disappointed with the Fury X, and AMD have been losing my faith year after year, as I bought into the 290X only to have duff memory on the card, with the whole series riddled with the problem when it first launched.
Anyway until I see full reviews I will withhold my full judgement.
They might go down the route AMD did when they hit the 9000 scheme: the next top-end Nvidia GPU will be the GTX X80Ti ;)
Or maybe they'll go back to having the G[T[X]] at the end, so we'll have the 1080GTX. Then they can start messing with GTS and GTO suffixes again to muddy the waters ;)
Typically, when NV and AMD reach the point of using a 10 in the name, they figure a way out of it. Shame, as a 1080 Ti sounds great.
9800GTX -> GTX 280 as an example.
9800XT -> X800XT as a lesser example (as it technically is 10800XT).
WRT the references to hardware schedulers, I've not yet double-checked but I thought that was a feature of GCN since version 1?
Anyway, it's interesting they chose to show whole system power consumption as given the system has a base load, the difference would look a lot bigger showing the card's power draw in isolation. However I understand they probably don't want to give too much away about the card.
Also it's nothing remotely like a paper launch - they're not launching anything but they're showing working silicon and its power consumption at the moment they actually announce the silicon. Nvidia talk about future GPUs and extremely nebulous bullet points years in advance. Considering this is likely not AMD's final silicon revision, and they're still at least a few months off release, it's IMO quite telling they're far enough with both hardware and software to confidently show a working demonstration.
ah but will it run crysis? ;)
Actually, I've been waiting to make a purchase of a next-generation graphics card for a VR rig. But since TSMC has been in bed with Apple, their manufacturing of new GPUs has stalled for both manufacturers.
I'm hoping for a leap in performance, but like CPUs, it seems sub-20nm designs are only a leap in efficiency and not peak performance.
There are rumours that AMD might be sourcing GPU production from both TSMC and GloFo/Samsung (probably for different chips rather than the costly exercise of dual-sourcing). I suppose the high volume of mobile processors on bleeding-edge nodes should at least help to improve yields and costs by the time GPUs enter mass production. Historically, GPUs were one of the forerunners with contract foundries and likely had to eat large costs and poorer yields because of it.
WRT performance - we haven't actually seen the performance of any sub-20nm GPUs? However as GPUs have been pressing against the power wall for a while now, bringing down silicon power is hugely useful in improving performance. 14/16nm also offers significant density advantages over the previous GPU node, 28nm, so that's more room for improvement as current GPUs (i.e. GM200/Fiji) are also up against the physical size limit. The new node also offers improved performance for a given power draw.
I loved AMD's shenanigans with that one, as it went 9800 -> X800 -> X1800 -> 2900
Brilliant piece of misdirection and sleight of hand to simply loop round the thousands again.
It was only when they got to the 7000 and OEM-only 8000 series cards, and people started pointing out that they were about to loop back round to the 9800 series again, that they changed the numbering system completely.
Of course, nvidia haven't, to the best of my recall, had a 1000 series card yet (don't think they started doing thousands until the FX 5000 series) so perhaps there's hope for the 1080Ti yet ;)
AMD are dual-sourcing their GPUs, it seems - GF/Samsung for the lower end, probably, and TSMC for the higher end. At least from what we have seen with the Apple chips, the Samsung-made ones were smaller with slightly worse power consumption, but the TSMC ones were larger in size. Samsung has been producing 14nm chips for longer than TSMC has its 16nm process, and I suspect at least for smaller chips it might be more mature - remember, the Polaris GPU shown is not massively larger than the 100mm2 A9.
With AMD probably using GF/Samsung for the mass-market chips, not only will they be able to get them out quicker than Nvidia, but it also counts towards their WSA with GF. Also, TSMC will probably be hammered for volume this year too.
I am very much looking forward to next-gen AMD GPUs. However, whilst there will be a Polaris arch, I don't think Polaris has anything to do with Arctic Islands. For a start, the arch name is completely different: Polaris is the name of a star, whereas Arctic Islands is an umbrella name for a group of islands in the Arctic (and Greenland is the top-dog GPU in Arctic Islands). I think Polaris is going to be the next gen after Arctic Islands, and this new arch is going to be based on constellation and star names. I think Polaris will have 2.5 times more performance per watt than Arctic Islands.
So we're going to have Arctic Islands and Polaris both in the same year? One with a 2x increase in performance and another with a 2.5x increase towards the end of the year? Best not buy an AMD card at the start of this year then, if they're just going to release another, even faster card a few months later. :)
Not that I recall. It's unlikely Polaris would be a separate and later-arriving family considering it has been demonstrated and we've heard nothing about Arctic Islands.
I don't expect much performance improvement - with the cost and yield limitations of a new node we won't see anything as big as the current 600mm2 flagships, and so flagship transistor counts will probably remain quite similar. There's been plenty of time for foundries to get very good at making 28nm chips BIG. Assuming the die area scales with (28/16)^2 (I know this is an oversimplification, but I'm no electrical engineer), then the 120mm2 chip shown working would be similar in size to a 380X (370mm2 vs 360mm2) - and 60fps in battlefront at medium is a fair bit slower than you'd expect from that card. It's a massive improvement in perf/watt though, total system power is down to a third of what a 380X drew in hexus tests (the major difference in the rest of the system is that hexus uses a 4770k at 4.4GHz, whereas AMD used a presumably stock speed 4790k)
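To spell out that scaling assumption, here's a quick back-of-envelope sketch in Python (my numbers, using the ideal (28/16)^2 scaling I mentioned, which real nodes won't actually hit):
Code:
# Back-of-envelope die-area scaling, assuming an ideal (28/16)^2 shrink.
# Real nodes scale worse than this, so treat the result as an upper bound.
old_node, new_node = 28.0, 16.0     # nm
scale = (old_node / new_node) ** 2  # ~3.06x ideal density gain
polaris_die = 120                   # mm^2, the demoed chip
print(polaris_die * scale)          # ~368 mm^2 "28nm equivalent", ~380X-sized
# A more conservative 2x density gain would put it nearer 240 mm^2 instead.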
They didn't avoid using 10, they just switched to roman numerals :p
I would have expected more love for the massive reduction in power consumption. What's happened to all the users that bang the 'more efficiency is better' drum?
This looks very interesting, and I'm surprised the chips are built on 14nm and not the TSMC 16nm process.
Things used to scale pretty much like that (if you compared eg one Intel process to another, not TSMC to Intel as companies measure slightly differently).
Recent nodes aren't scaling so well though, so the 3x improvement you calculated there is probably on the optimistic side, but we don't know by how much because it will be design-dependent. Still, I would hope for 2x density, which would put it at the equivalent of 240mm^2, which is around GTX 960 sized (228mm^2). Perhaps that is one reason why they used a 950 in their comparison, who knows.
Tonga always seemed a bit of a large chip for what you get, perhaps that is the alleged unused 128 bits of memory interface, perhaps there is more stuff on there that AMD aren't telling us about, so I don't think the 380X is a good chip to compare sizes against.
Well according to AMD they are. What makes you think different?
This bit in the Anandtech article:
http://anandtech.com/show/9886/amd-r...architecture/3
Quote:
Originally Posted by Anandtech
Well this is from AMD.
https://www.youtube.com/watch?v=5g3eQejGJ_A
Transistors per cost and performance didn't scale well to 20nm planar which is why it saw limited adoption, however this apparently improves a lot with the FinFET nodes, so it's not quite so doom-and-gloom. Yields are always a potential issue with bleeding-edge nodes, and Nvidia always complains, however this time there will be some maturity on both nodes because of the mobile processors having been in volume production for ages now.
Maybe it's because it's not about efficiency of products from their favourite brand? :innocent:
That's probably true, LOL. Still, you would expect a few to comment, as it sounds like a dream.
Much higher performance for much lower power.
New features in performance.
Upped frequency.
~2x the performance per watt of Nvidia.
If the high-end cards are 250W then this will be a massive jump in performance. If they are closer to the top end of the ATX spec, then Polaris would be a very fitting name for the performance on offer.
We need Scary Jim to make another performance prediction.
It's currently hard to know how the efficiency improvements are split between microarchitecture and process.
However also remember we're basically getting two node jumps in one as GPUs skipped over 20nm. It may not be quite that much of a jump in terms of density as AFAIK at least some of the metal layers are carried over from the 20nm node, however as for the transistors themselves we should see a big improvement in both power and performance. This seems to be clearly true on the mobile side at least.
Couple of things there. Firstly, scaling down a GPU to 120mm^2 isn't that interesting because I already have one that fast and as long as it is quiet enough I am OK with the amount of heat it makes.
Where it gets interesting is where they use the transistor density to add more transistors to the design. But then we don't know how much the silicon is going to cost per square mm, so we expect AMD will be able to make a range that is twice as fast as the current range, but we don't know how expensive they will be and that is kind of key. If I can get the performance of Fury for £150, I'm in.
Finally, Nvidia are doing all the same stuff, so "power vs Nvidia" is pretty meaningless until we see what Pascal can do.
Still, we are getting away from 28nm, something I have been waiting for for a long time. Bring it on, credit card on standby :D
I have a GTX960 - is that close enough?? :p
I think the end results will be similar, but you need to consider that one of the main reasons Nvidia gained better efficiency, and AMD somewhat gave it up, is Nvidia's move to software scheduling from Kepler onwards and AMD's move to hardware scheduling from GCN onwards.
If Nvidia start moving towards doing more in hardware again, they will hit the same issues that AMD did with GCN, and that they themselves did with Fermi, with regards to power consumption - and there has been a lot of noise about the compute aspects of Pascal, IIRC.
I don't like it... NICE NICE NICE!!
Not until we get more details, frankly :p
If we could find a decent source for 2.5x perf/watt I could have a stab - you'd be looking at Fury X performance at around 110W TDP. But as we know the perf/watt increase isn't linear: Nano managed, iirc, 85% of the performance at 64% of the power, which gave the 2x perf/watt increase they discussed then. If we're looking at 2.5x perf/watt at < 100W already, that could easily be 2x at 175W and 1.5x at 275W - i.e. very similar perf/watt gains to the jump from 290X -> Fury X/Nano.
Previously the perf/watt increase was relative to their own cards, so let's assume the 2.5x has a genuine source and it's given on the same basis. They're doing comparisons based on same performance at lower power, so that 2.5x is probably reflecting the same performance at 40% of the power draw of a current gen card (or equivalent). Of course, we don't know if the 2.5x figure is based off the Hawaii/Tonga/Tahiti/etc. cards, or if it's based off Fury/Nano, so let's look at both.
So, let's ignore Fiji for a minute, and look at the GDDR5 GCN cards. There is no AMD card that's directly equivalent to a GTX 950, performance-wise: it sits in a gap between the 370 and 380. But theoretically an AMD card with that level of performance might have a TDP between those two cards, so let's assume we're looking at a current gen card with a TDP ~ 150W. A new gen card with the same performance but 2.5x perf/watt would have a TDP of around 60W. Looks pretty good so far, doesn't it ;)
Now let's consider Fiji. Nano has a TDP of 175W: only a little bit more than our theoretical GTX 950 equivalent. If there's a 2.5x perf/watt increase over the Fiji generation, Nano would have a TDP of just 70W. That's low enough to be bus-powered. It'd push the TDP of our GTX 950 equivalent down considerably: maybe as low as 30W. And that's a figure I've seen bandied around a few places as making sense if the AMD power draw figures on the slides are for a whole system. It just might be accurate. And that would be fricking incredible...
So, no actual performance predictions: we haven't seen any specs or TDPs yet. But my headline prediction for Polaris - we could get Nano performance on a bus-powered card.
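For anyone who wants to check the working, here's the arithmetic as a quick Python sketch (the TDP inputs and the 2x Fiji-over-Hawaii assumption are mine, not AMD's):
Code:
# Sketch of the perf/watt -> TDP extrapolation above. Assumes a claimed
# perf/watt multiplier applies at matched performance (the basis AMD uses).
def new_tdp(current_tdp_w, perf_per_watt_gain):
    # Same performance, so power scales down by the perf/watt gain.
    return current_tdp_w / perf_per_watt_gain

print(new_tdp(150, 2.5))        # ~150W 'GTX 950 class' GDDR5 GCN card -> 60W
print(new_tdp(175, 2.5))        # Nano (175W TDP) -> 70W, i.e. bus-powerable
print(new_tdp(150, 2.0 * 2.5))  # if 2.5x is over Fiji, itself ~2x Hawaii -> 30W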
Thanks Jim. All I could muster was 'like a dream'. So, to put that in perspective, we could be looking at cards that pull about half the power of a GTX 970M laptop card, with performance to soundly beat a desktop GTX 980.
A few more details.
https://www.youtube.com/watch?v=hvD37UUcdIo
HDR monitor support is a very welcome addition.
That'd be convenient, because it has a TDP of 150W ;)
Well, I'm probably painting a much rosier picture than we'll get in reality.
If the 2.5x perf/watt is at the R9 270 performance level but based on Fiji tech, then that hypothetical Fiji-based R9 270 would only draw around 90W. I suspect we'd then see the perf/watt drop off as you increase the target TDP, in the same way the Fiji had 2x the perf/watt at 175W, but only 1.5x at 275W.
Even then you could (should?) be looking at Nano performance at around 90W - roughly the same TDP as a GTX970M (or a desktop GTX 950, incidentally....)
EDIT: just to quickly add that if the figures are right we're already looking at around 4x the perf/watt of the 290X - a good step towards the target 25x by 2020...
From the CES video, they said the drivers haven't been optimised yet, so you would expect a fair amount of performance and power efficiency still to come.
It appears the Pascal-based automotive board JHH showed off actually had Maxwell-based GTX 980M boards on it instead.
lol, no wood screws this time though at least :D
http://semiaccurate.com/2016/01/11/n...m-competition/
The other penny drops:
http://wccftech.com/amd-shows-enthusiast-polaris-ces/
Another larger Polaris chip was demonstrated to journalists!
http://wccftech.com/amd-confirms-pol...ktops-laptops/
Back to school starts (in the UK at least) in week 22 (July) - which puts Polaris on track for a launch at Computex.
Hmm. Could this mean Zen-based CPUs will also release around the same time?
After 5 pages all I want to know is...
Will there be a card with 750 Ti performance at 40-50W? If so, they'll sell millions of them.
I think the lowest end of the discrete market will soon become all about how much performance can be had for ~70W, or which APU to buy.
What CAT said :)
My calculations suggest we'll be looking at R7 270/GTX 950 performance at either 60W or < 40W, depending on which generation of product they're claiming 2.5x perf/watt against. Either way, you're talking 60fps @ 1080p medium settings in a bus-powered card. If the lower figure is right, then we could easily be looking at R9 380 performance (or higher) in a bus-powered card, and R9 Nano at around 100W.
EDIT: Of course, just because the GPU is low-TDP doesn't mean it'll be low-end: I can see no reason why AMD couldn't release a bus-powered R7 270 equivalent and still charge £100+ for it. If the performance is there, you can charge pretty much what you want, and pricing is bound to depend on the yields from a relatively new node: small silicon doesn't necessarily mean cheap silicon any more...
Interesting bit there: I would expect the GPU part of an APU to burn about 50W, which, if your prediction is right, should make R7 270 the integrated-graphics performance level once 14/16nm APUs come out.
If they have to drop max APU TDP to 65W (which has to happen at some point, perhaps 10nm or 7nm) that could still allow ~35W integrated graphics.
I suspect it's lower than that: The 95W 7850k and 65W 7800 have identical GPU sections (512 @ 720MHz) and the 35W mobile FX-7600p still manages 512 @ 600MHz. The performance is currently largely capped by memory bandwidth, and the 270 has roughly 4x the bandwidth available to a 7850k (256bit GDDR5 v dual channel DDR3), so the key is going to be what memory the APU comes with ... I reckon it'd need more than dual channel DDR4 to feed the IGP....
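As a rough illustration of that bandwidth gap, a Python sketch (the 5.6GT/s GDDR5 and DDR3-2133 figures are my assumed clocks for an R9 270 and a 7850K system, and it ignores real-world efficiency):
Code:
# Rough peak-bandwidth comparison: R9 270 vs an A10-7850K's system memory.
def bandwidth_gbs(bus_bits, transfer_gtps):
    return bus_bits / 8 * transfer_gtps  # GB/s

r9_270 = bandwidth_gbs(256, 5.6)    # 256-bit GDDR5 @ 5.6GT/s -> 179.2
apu    = bandwidth_gbs(128, 2.133)  # dual-channel DDR3-2133  -> ~34.1
print(r9_270 / apu)                 # ~5x peak; 'roughly 4x' with faster DIMMs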
Well, just look at the APU in the consoles, with a 28nm chip. I would imagine a 14nm Zen-based APU with Polaris graphics would allow for at least double the performance on a PC.
I don't think the power usage is that static though, is it? As shown by the way (especially with the A8-7600) you can select the TDP you want to operate at in the BIOS. So I would expect the 65W APU to just throttle back more.
Bandwidth? I guess that is where a stack of HBM2 would come in handy :D
That does become an interesting cost question if you are right. Do you buy an APU that is bandwidth-limited and needs an external GPU, or do you stump up the money for a 2GB stack of HBM RAM and possibly not need a GPU? Interesting times!
I suppose it could depend on how the chip is tuned as to how much memory bandwidth is required. The consoles certainly seem to punch above their weight when it comes to memory bandwidth, with pretty slow DDR3.
Probably not, but it suggests to me that the CPU cores are likely to contribute a larger proportion of the peak TDP, if that's the first place they cut the specs to deliver reduced TDPs. Comparing the IGP of an APU to a discrete card is always going to be tricky, e.g. a discrete card has to budget for the memory controller and memory chips which isn't such a concern on the IGP (as it shares the memory controller with the CPU cores and the DIMMs are powered separately)...
Well, I don't think the Xbox leverages the 32MB of memory much at all; the performance would depend on the 8GB of system RAM. The GDDR5 in the PS4 might run at a higher frequency, but GDDR5 is hobbled by latency, and lower latency seems to play a large part in performance.
Thinking about it, if AMD offered an APU with R9 370/380 performance then 250-300 watt would be more than acceptable.
Would probably need the failed BTX chassis design though which Intel put together to try and cope with ever hotter Pentium 4 designs. Those would vent CPU heat to the outside world and would cope well.
Some Xeon chips are 160W as well as the silly AMD FX chips so a big chip wouldn't be outrageous. Would need to be really underclocked and undervolted to get it in a laptop though, and that seems to be important to AMD and Intel these days.
Perhaps a pair of 95W APUs in a single package working in SLI? Might end up with a lot of CPU cores too :D
A decent HSF and reasonably modern case should deal with a chip like that.
WRT power draw, if we take the ~100W (rough value taken from TPU/Tom's) of the 950 under load, we get around 40W for the system, which would leave 46W for the Polaris card assuming the CPU is drawing about the same for each system (they're running at the same FPS so differences would be mostly down to driver efficiency). So, the ~50W ballpark seems about right. That's logically the highest value that makes sense for the Polaris card in this demonstration.
Or looking at it another way, the Polaris card is 54W lower than the 950. The possibility they're only being lightly loaded because of the FPS cap doesn't really change the conclusion, in fact the lower the load on the cards, the greater the difference in power draw must be. If we were to assume the 950 were more lightly loaded and drawing 80W, that would leave 60W for the base system and therefore the Polaris would come out at 26W. And 40W for the rest of the system seems a bit low so we're likely looking at something less than full (100W) load for the 950.
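Spelling that subtraction out as a sketch (the 140W/86W at-wall figures are from the slide; the ~100W 950 card draw is the assumed value from TPU/Tom's):
Code:
# At-wall power arithmetic from the slide, with an assumed 950 card draw.
system_950     = 140  # W at the wall, GTX 950 system (from the slide)
system_polaris = 86   # W at the wall, Polaris system (from the slide)
card_950       = 100  # W, assumed full-load draw from TPU/Tom's reviews

base = system_950 - card_950          # ~40W for CPU, board, drives, etc.
card_polaris = system_polaris - base  # ~46W upper bound for the Polaris card
print(base, card_polaris)
# If the 950 were only lightly loaded (say 80W), the base would be 60W and
# the Polaris card just 26W - a lighter load widens the gap, not narrows it.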
I guess we'll just have to wait and see the final products to know for sure.
Yeah, when I did my calcs earlier in this thread I checked the power draws against some Hexus GTX 950 reviews and figures are in the right ballpark for system-at-wall - Hexus numbers were slightly higher but used an overclocked i7: and I'd put money on the AMD test using a stock-clocked processor of lower spec than that ;)
Based on the AMD-quoted efficiency improvements I reckon 40Wish is about right for the Polaris card, putting the GTX 950 draw at 94W in their test, which also sounds about right - the GTX 950 has a listed 90W TDP iirc.
The be-all is going to be the pricing, IMNSHO. How well will the 14nm process yield? What memory interface are they going for, and can they deliver in volume? Will they price by performance, or by silicon cost? So many questions... wonder how long it'll be before we get answers... :undecided
The AMD slides state they used an i7 4790k in the test systems - an unusual choice as it's not Haswell's most efficient bin, but again I suppose it comes down to the 'if you think we're deliberately CPU-limiting the GPU to skew the results somehow, you're wrong' thing.
Yeah I'd thought of that when looking at the power consumption numbers - if anything you'd expect CPU power draw to be higher on the Polaris system than the 950 one.
But WRT the 4790K thing, I don't imagine picking a 4790 non-K or an i5 would have made much difference to performance, but system power consumption might have been lower (though it depends on what clocks they end up running at under partial load), making for a more impressive difference on the slide. But again, picking something else might have led to people nit-picking the choice as suspicious.
It seems like AMD have given away a really small amount of detail to get people interested, but not enough to reveal much about performance/positioning of the card - e.g. the FPS cap prevents us from seeing the card's actual performance.
It could be a cunning ploy to force Nvidia's hand too, to see what their response is.
I hope AMD do get a win out of this. I think if they fail with the 400-series it might end up being very bad in the long run for us consumers.