What CAT said
My calculations suggest we'll be looking at R7 270 / GTX 950 performance at either 60W or < 40W, depending on which generation of product they're claiming 2.5x perf/watt against. Either way, you're talking 60fps @ 1080p medium settings in a bus-powered card. If the lower figure is right, then we could easily be looking at R9 380 performance (or higher) in a bus-powered card, and R9 Nano at around 100W.
EDIT: Of course, just because the GPU is low TDP doesn't mean it'll be low-end: I can see no reason why AMD couldn't release a bus-powered R7 270 equivalent and still charge £100+ for it. If the performance is there, you can charge pretty much what you want, and pricing is bound to depend on the yields from a relatively new node: small silicon doesn't necessarily mean cheap silicon any more...
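For anyone wanting to sanity-check that arithmetic, here's a minimal Python sketch; the baseline board powers are my assumptions (150W for an R7 270-class card, 90W for a GTX 950-class card), not figures quoted in the thread:

```python
# Back-of-envelope: power needed to match a baseline card's performance,
# given a claimed perf/watt multiplier. Baseline board powers are assumed.
def implied_power(baseline_watts, perf_per_watt_gain=2.5):
    return baseline_watts / perf_per_watt_gain

print(implied_power(150))  # 60.0W -> the 60W case
print(implied_power(90))   # 36.0W -> the "< 40W" case
```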
Interesting bit there is that I would expect the GPU part of an APU to burn about 50W, which, if your prediction is right, should make R7 270 an integrated-graphics performance level once 14/16nm APUs come out.
If they have to drop the max APU TDP to 65W (which has to happen at some point, perhaps at 10nm or 7nm), that could still allow ~35W integrated graphics.
I suspect it's lower than that: the 95W 7850K and the 65W 7800 have identical GPU sections (512 shaders @ 720MHz), and the 35W mobile FX-7600P still manages 512 @ 600MHz. The performance is currently largely capped by memory bandwidth, and the 270 has roughly 4x the bandwidth available to a 7850K (256-bit GDDR5 vs dual-channel DDR3), so the key is going to be what memory the APU comes with... I reckon it'd need more than dual-channel DDR4 to feed the IGP...
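As a quick back-of-envelope check on that bandwidth ratio (a sketch assuming reference memory speeds; the exact multiple depends on the DDR3 clocks you pick):

```python
# Theoretical peak bandwidth in GB/s: (bus width in bits / 8) * rate in GT/s.
def bandwidth_gbs(bus_bits, rate_gtps):
    return bus_bits / 8 * rate_gtps

r7_270 = bandwidth_gbs(256, 5.6)     # 256-bit GDDR5 @ 5.6 GT/s -> 179.2 GB/s
apu_mem = bandwidth_gbs(128, 2.133)  # dual-channel DDR3-2133   -> ~34.1 GB/s

# ~5.2x with these speeds; slower GDDR5 or faster DDR3 assumptions
# bring it closer to the "roughly 4x" above.
print(r7_270 / apu_mem)
```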
Well, just look at the APU in the consoles with a 28nm chip. I would imagine a 14nm Zen-based APU with Polaris graphics would allow for at least double the performance on a PC.
I don't think the power usage is that static though, is it? As shown by the A8-7600 especially, you can select the TDP you want to operate at in the BIOS, so I would expect the 65W APU to just throttle back more.
Bandwidth? I guess that is where a stack of HBM2 would come in handy
That does become an interesting cost question if you are right. Do you buy an APU that is bandwidth-limited and needs an external GPU, or do you stump up the money for a 2GB stack of HBM RAM and possibly not need a GPU at all? Interesting times!
I suppose it could depend on how the chip is tuned as to how much memory bandwidth is required. The consoles certainly seem to punch above their weight when it comes to memory bandwidth with pretty slow DDR3.
Probably not, but it suggests to me that the CPU cores are likely to contribute a larger proportion of the peak TDP, if that's the first place they cut the specs to deliver reduced TDPs. Comparing the IGP of an APU to a discrete card is always going to be tricky; e.g. a discrete card has to budget for the memory controller and memory chips, which isn't such a concern for the IGP (as it shares the memory controller with the CPU cores, and the DIMMs are powered separately)...
Well, I don't think the Xbox leverages the 32MB of memory much at all; the performance would depend on the 8GB of system RAM. The GDDR5 in the PS4 might run at a higher frequency, but GDDR5 is hobbled by latency, and lower latency seems to play a large part in performance.
Thinking about it, if AMD offered an APU with R9 370/380 performance, then 250-300W would be more than acceptable.
It would probably need the failed BTX chassis design though, which Intel put together to try to cope with ever-hotter Pentium 4 designs. Those would vent CPU heat to the outside world and would cope well.
Some Xeon chips are 160W, as are the silly AMD FX chips, so a big chip wouldn't be outrageous. It would need to be heavily underclocked and undervolted to get it into a laptop though, and that seems to be important to AMD and Intel these days.
Perhaps a pair of 95W APUs in a single package working in CrossFire? Might end up with a lot of CPU cores too.
A decent HSF and reasonably modern case should deal with a chip like that.
WRT power draw: if we take the ~100W (rough value taken from TPU/Tom's) for the 950 under load, we get around 40W for the rest of the system, which would leave 46W for the Polaris card, assuming the CPU is drawing about the same in each system (they're running at the same FPS, so differences would be mostly down to driver efficiency). So the ~50W ballpark seems about right. That's logically the highest value that makes sense for the Polaris card in this demonstration.
Or looking at it another way, the Polaris card is drawing 54W less than the 950. The possibility that they're only being lightly loaded because of the FPS cap doesn't really change the conclusion; in fact, the lower the load on the cards, the greater the relative difference in power draw must be. If we were to assume the 950 were more lightly loaded and drawing 80W, that would leave 60W for the base system, and the Polaris card would therefore come out at 26W. And 40W for the rest of the system seems a bit low, so we're likely looking at something less than full (100W) load for the 950.
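To make the working above explicit, here's a small sketch. The ~86W/~140W at-wall figures were widely reported from AMD's demo; they aren't quoted directly in this post but are consistent with its arithmetic:

```python
# Back out card power from the at-wall figures, assuming the rest of
# each system draws the same amount.
polaris_wall, gtx950_wall = 86, 140

for gtx950_card in (100, 90, 80):       # assumed 950 draw under this load
    base = gtx950_wall - gtx950_card    # rest-of-system share
    polaris_card = polaris_wall - base  # remainder for the Polaris card
    print(f"950 @ {gtx950_card}W -> base {base}W, Polaris {polaris_card}W")
# 950 @ 100W -> base 40W, Polaris 46W
# 950 @ 90W  -> base 50W, Polaris 36W
# 950 @ 80W  -> base 60W, Polaris 26W
```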
I guess we'll just have to wait and see the final products to know for sure.
Yeah, when I did my calcs earlier in this thread I checked the power draws against some Hexus GTX 950 reviews, and the figures are in the right ballpark for system-at-wall. The Hexus numbers were slightly higher, but they used an overclocked i7, and I'd put money on the AMD test using a stock-clocked processor of lower spec than that.
Based on the AMD-quoted efficiency improvements I reckon 40W-ish is about right for the Polaris card, putting the GTX 950 draw at 94W in their test, which also sounds about right - the GTX 950 has a listed 90W TDP IIRC.
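If that 40W/94W split is right (i.e. a 46W base system), the implied efficiency ratio at matched FPS falls straight out; a one-liner using the estimates above:

```python
# At matched FPS, the perf/watt advantage is just the inverse power ratio.
print(94 / 40)  # ~2.35x the GTX 950's perf/watt with these estimates
```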
The be-all and end-all is going to be the pricing, IMNSHO. How well will the 14nm process yield? What memory interface are they going for, and can they deliver it in volume? Will they price by performance, or by silicon cost? So many questions... wonder how long it'll be before we get answers...
The AMD slides state they used an i7 4790K in the test systems - an unusual choice as it's not Haswell's most efficient bin, but again I suppose it comes down to the 'if you think we're deliberately CPU-limiting the GPU to skew the results somehow, you're wrong' thing.