Yeah I'd thought of that when looking at the power consumption numbers - if anything you'd expect CPU power draw to be higher on the Polaris system than the 950 one.
But WRT the 4790K thing, I don't imagine picking a 4790 non-K or an i5 would have made much of a difference to performance, but system power consumption might have been lower (though it depends on what clocks they end up running at under partial load), making for a more impressive difference on the slide. But again, picking something else might have led to people nit-picking the choice as something suspicious.
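For what it's worth, here's a rough back-of-the-envelope of what a capped-FPS wall-power demo like that can and can't tell us. The wall figures and PSU efficiency below are illustrative assumptions on my part, not confirmed numbers:

```python
# Back-of-the-envelope estimate of the GPU-only power gap from a
# capped-FPS system power demo. All numbers here are assumed for
# illustration, not AMD's confirmed figures.

polaris_system_w = 86.0   # assumed wall draw of the Polaris system
gtx950_system_w  = 140.0  # assumed wall draw of the GTX 950 system
psu_efficiency   = 0.90   # assumed PSU efficiency at this light load

# If the rest of each system (CPU, board, RAM) draws roughly the same
# at the 60 FPS cap, the wall-power gap is mostly the GPU gap:
gpu_gap_w = (gtx950_system_w - polaris_system_w) * psu_efficiency
print(f"Estimated GPU-only power gap: ~{gpu_gap_w:.0f}W")

# Caveat (as above): if the CPU has to work harder to feed one card
# than the other, part of that gap is CPU power, not GPU power.
```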
It seems like AMD have given away just enough detail to get people interested, but not enough to reveal much about the performance/positioning of the card - e.g. the FPS cap prevents us from seeing the card's actual performance.
It could be a cunning ploy to force Nvidia's hand too, to see what their response is.
I hope AMD do get a win out of this. I think if they fail with the 400-series it might end up being very bad in the long run for us consumers.
Well it looks like Polaris will deliver the goods, and then some, in the laptop and notebook markets, so I think they have two wins already. If AMD can transform the power saving into performance with the high-spec Polaris chips, then we should see a very large jump in processing power as the power use increases. I'm fully expecting AMD to run away with DX12/Vulkan and VR performance with Polaris.
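To put rough numbers on that idea: if perf/watt roughly doubles and the big chips get a full desktop power budget, the gains compound. A quick sketch, with made-up illustrative figures rather than any leaked specs:

```python
# Sketch of how a perf/watt gain plus a bigger power budget compounds.
# All numbers are illustrative assumptions, not real specs.

def scaled_performance(base_perf, base_power_w, perf_per_watt_gain, new_power_w):
    """Estimate performance after an efficiency gain and a new power budget.

    perf_per_watt_gain is a multiplier, e.g. 2.0 for 'double perf/watt'.
    """
    base_efficiency = base_perf / base_power_w        # perf per watt
    new_efficiency = base_efficiency * perf_per_watt_gain
    return new_efficiency * new_power_w

# Hypothetical 28nm card: 100 (arbitrary units) at 150W. If 14nm
# doubles perf/watt and a high-spec part gets a 250W budget:
print(scaled_performance(100, 150.0, 2.0, 250.0))    # ~333, i.e. >3x
```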
If they can bolt a couple of CPU cores on that sort of GPU they'd have a heck of a mobile APU! 14nm could allow some impressive performance from APUs but as others have said I guess it will come down to yield and pricing, especially at first, which is why we probably won't see 14nm APUs for a while after the discrete parts.
We probably won't see Polaris in an APU until Zen hits the market towards the end of the year, and that's if nothing goes wrong or gets delayed. All the signs are looking good for AMD, so fingers crossed.
I'm not sure they will run away with it. CPU usage seems good in DX12, but DX12 is a very small market. But look at what nVidia managed with Maxwell - if they get anything close to the same power improvements from FinFET that AMD claim, then Pascal is going to be quite amazing.
The R9 290 pretty much made the £1000 Maxwell card obsolete for a third of the price. The second round of Maxwell cards came about a year after Hawaii, cost a lot more, and the DX11 performance was nothing special; they are still getting a hard time from the R9 290 cards today. Apart from the low power use, I'm not sure Nvidia managed that much with Maxwell. The next-gen API performance is very questionable compared to Hawaii, never mind Fiji.
It's the power usage I'm talking about - quite incredible really. If they get the same kind of gains from FinFET that AMD did, then they've got a lot of scope for all sorts of good things.
The power use was very good and much better than the first round of Maxwell cards, but people made too much of the TDP numbers Nvidia marketed. In truth, the real-world difference between Hawaii and Maxwell wasn't that big a deal, especially considering the price difference and the time it took Nvidia to get the cards to market.
Anyway, I was talking about the high-spec cards, and I'm not sure how power use is related to DX12/Vulkan and VR performance?
I think that was totally necessary to have any chance of Tegra working, and Nvidia did throw a lot behind Tegra. So far it hasn't worked out for them, which I think is a shame, especially as a lot of the damage to Tegra's image came from the old DX9-era shaders in the older Tegra devices (like the HTC One X I used to have), so Maxwell really needed to be power efficient. I think it is too late now; the tablet and phone markets are tied up and gone for Nvidia, leaving things like Chromebooks, for which Tegra is really very good (I got my wife a Tegra-based HP Chromebook for Christmas).
There seem to be claims that gaming laptops are going to be the next big thing, but I see that as less likely than the Ultrabook revolution that never actually happened.
So yes, Nvidia have superb power usage, but I don't know anyone who really cares; if people cared about power or heat they wouldn't buy 970s or water-cooled dual 980 Ti SLI setups. They want the speed, and they trust the drivers.
I'm sure the designers of the PS4 and XB1 would have loved more power efficiency, but it seems that wasn't a deal breaker either, and those are the most power-sensitive mass-production devices I can think of atm.
If you look at the factory-overclocked GTX 980 Ti cards, they are right up there with the highest-end AMD cards in power draw.
As CAT said, only the GTX 750 Ti and the GTX 980 are low-power cards; the others are not as great as Nvidia want you to believe.
But the problem is Nvidia ripped a lot out to get there. AMD's higher power consumption is primarily down to their use of hardware scheduling, and Maxwell ripped a lot of compute out of Kepler to help too. This is why the GK210 is still being used in commercial markets.
Nvidia has advertised Pascal as having a renewed focus on compute, and if they do move to adding back more hardware functionality, it's going to mean more power consumption, especially if, like AMD, they are making a renewed push towards things like VR.
There is no free lunch here. I can see them converging in some ways, and some of their figures are somewhat marketing-based (the GTX 970 is meant to be in a lower power consumption and TDP class than the GTX 770, for example, when it really is not).
Nvidia has probably trialled as much power-saving tech as they can with Maxwell, beyond using HBM/HBM2, and adding functionality in hardware is probably going to add to power consumption.
AMD still has a lot of functionality in hardware (especially some of the stuff added for VR, which they had a year before Nvidia and where they still have more functionality now), and the same goes for support for adaptive sync at the GPU level, etc.
OTOH, they are probably not as refined as Nvidia with regards to power-saving tech, so they have probably caught up in this regard with Polaris, and might even start to remove some stuff which is not really required in hardware to cut power consumption.
Basically, I don't think Nvidia has anywhere near as much scope to drop power consumption on a purely design level as AMD does, since AMD simply has more functionality they could drop to reduce power consumption. Nvidia has done as much as they can, and that's why they are still forced to use Kepler for some markets.
Edit!!
Do I think the next 300mm² to 400mm² Nvidia GPU is going to consume a bit less power than the next 300mm² to 400mm² AMD one?
Yep, since the AMD GPU will be dual-use, with a portion of its transistors dedicated to pure DP performance, a bit like the HD 7970 vs the GTX 680, and the Nvidia GPU won't be their top-end part, since they will wait until they can get a 500mm² monster out first.
People were quick to judge that GCN 1.0 was worse off in efficiency than Kepler, but failed to realise that it was not really a valid uarch comparison.
If you looked down the range at equivalent GPUs like Pitcairn and the GK106, they traded blows, and I think in most cases it will be the same with Polaris against Pascal.
As a casual gamer I don't really get the top-end cards, but I'm really excited by this and whatever Nvidia follow up with. Low-power, smaller cards with the meat to play the top titles whenever you need it.