Well, that performance is a bit sucky; I was expecting the price, but with more like a 50% to 100% boost over the 980 Ti.
On the other hand, if the GTX 1080 is only 25% faster than the mainstream Polaris parts, then that bodes well for AMD this round.
Yes, seems that way.
http://videocardz.com/59718/nvidia-g...tion-explained
I'll take 3 please, with a G-Sync display as well! Gotta make sure they have enough $ for their R&D budget!
A good start from nvidia.
Personally I will wait to see what AMD have and then make my choice on red or green.
I will then see what the 3rd parties do with the reference cards.
Intel Core i5-6600K 3.5GHz Quad-Core Processor | Noctua NH-U14S 55.0 CFM CPU Cooler | Gigabyte GA-Z170X-Gaming 5 ATX LGA1151 Motherboard | Corsair Vengeance LPX 16GB (2 x 8GB) DDR4-3200 Memory | MSI GeForce GTX 970 GAMING 4G | Samsung SM951 128GB M.2-2280 SSD | Samsung 850 EVO-Series 500GB 2.5" SSD | WD BLACK SERIES 1TB 3.5" HDD | Corsair RMx 650W 80+ Gold Certified Fully-Modular ATX Power Supply | Fractal Design Define S ATX Mid Tower Case
In this day and age I'm surprised we're still seeing an SLI connector; it's time for the bridge bus signals to pass through PCIe 3.0.
Maxwell does async compute, plenty of data available to show it.
The problem is that the way it's implemented requires developers to write code in a specific manner (they need to target async compute to a specific unit, or keep the queue length at 31 or below), rather than just throwing commands into a queue blindly. If they don't do it right, they stall the pipeline. The new cards have something to make them time-slice async compute better, although the jury is still out on what difference it will make.
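To illustrate the point about queue length, here's a toy model (purely hypothetical, not real driver or hardware code) of why blindly submitting past a fixed 31-entry queue would stall, while a developer who respects the limit sees no stalls:

```python
from collections import deque

# 31-entry limit as mentioned above; the stall behaviour here is a
# simplified illustration, not how the actual hardware scheduler works.
MAXWELL_QUEUE_DEPTH = 31

def submit_commands(commands, queue_depth=MAXWELL_QUEUE_DEPTH):
    """Count 'pipeline stalls' when submissions exceed the queue depth."""
    queue = deque()
    stalls = 0
    for cmd in commands:
        if len(queue) >= queue_depth:
            # Queue full: submission blocks until an entry drains.
            stalls += 1
            queue.popleft()  # pretend one command retires
        queue.append(cmd)
    return stalls

print(submit_commands(range(31)))   # 0 - batch fits the queue, no stalls
print(submit_commands(range(100)))  # 69 - every command past 31 stalls
```

The point being: on Maxwell the burden is on the developer to size their submissions, whereas hardware time-slicing would let them queue blindly.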
Main PC: Asus Rampage IV Extreme / 3960X@4.5GHz / Antec H1200 Pro / 32GB DDR3-1866 Quad Channel / Sapphire Fury X / Areca 1680 / 850W EVGA SuperNOVA Gold 2 / Corsair 600T / 2x Dell 3007 / 4 x 250GB SSD + 2 x 80GB SSD / 4 x 1TB HDD (RAID 10) / Windows 10 Pro, Yosemite & Ubuntu
HTPC: AsRock Z77 Pro 4 / 3770K@4.2GHz / 24GB / GTX 1080 / SST-LC20 / Antec TP-550 / Hisense 65k5510 4K TV / HTC Vive / 2 x 240GB SSD + 12TB HDD Space / Race Seat / Logitech G29 / Win 10 Pro
HTPC2: Asus AM1I-A / 5150 / 4GB / Corsair Force 3 240GB / Silverstone SST-ML05B + ST30SF / Samsung UE60H6200 TV / Windows 10 Pro
Spare/Loaner: Gigabyte EX58-UD5 / i950 / 12GB / HD7870 / Corsair 300R / Silverpower 700W modular
NAS 1: HP N40L / 12GB ECC RAM / 2 x 3TB Arrays || NAS 2: Dell PowerEdge T110 II / 24GB ECC RAM / 2 x 3TB Hybrid arrays || Network:Buffalo WZR-1166DHP w/DD-WRT + HP ProCurve 1800-24G
Laptop: Dell Precision 5510 Printer: HP CP1515n || Phone: Huawei P30 || Other: Samsung Galaxy Tab 4 Pro 10.1 CM14 / Playstation 4 + G29 + 2TB Hybrid drive
All the comments about the 'only' 25% extra performance being disappointing - are you all on this planet? That's an outstanding improvement, especially within a similar thermal/power envelope to the 980. Where on Earth are they going to get 50-100% improvements from? Those days are long past - can't remember those sort of increases since the Geforce 8000 series. If these cards deliver what has been implied here, they will step things up very well. Will be very interesting to see what comes with Polaris.
Well, the increase isn't much different from previous generations where they've stayed on the same process node, while the expectation is that the 14/16nm FinFETs should bring similar gains to the previous jumps in process node. And they probably have, but we've not seen the really big products yet: both AMD and Nvidia are focusing on slightly smaller chips, which is sensible for a new process. That said, the big-chip price from Nvidia is a little disappointing, and perhaps part of the reason for disappointment in the 25% performance increase claim.
Note that's just performance increase. The improvement in efficiency for said performance is most welcome.
The GTX 980 Ti is 8B transistors; the 1080 launched at only $50 less and weighs in at 7.2B transistors. That splits the difference in launch price with the "plain" GTX 980, which at launch was $50 cheaper than the 1080.
So we get 800M fewer transistors in our graphics card than the similarly priced older card. If the 1080 had the same launch price as the 980 that wouldn't be so bad, but it doesn't: they raised the price on us once again.
But why would I expect more transistors? Well, we have been stuck on 28nm for years now. This isn't a half-node shrink to 22nm, it isn't a full-node shrink to 20nm, it is a node-and-a-half shrink to 16nm. Even in the modern world of diminishing returns and people talking about "post Moore's law", that is about a doubling of transistors.
So, my expectation was that Nvidia would stick to the same launch price (oops) and use roughly twice the 5.2B transistors of a GTX 980.
Another data point here, the old 980 was 5.2B vs Titan at 8B transistors, a factor of around 1.5 difference.
GTX 1080 has 7.2B vs GP100 with 15.3B transistors, a factor of 2.1. The 1080 is way smaller than the Titan silicon. GP100 vs GM200, well the big Pascal chip is almost double the transistors which is what I would expect.
So, the 1080 is quick, but imagine what it would be like if it had 10B transistors in it?
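The ratios above can be checked with a quick bit of arithmetic, using the transistor counts as quoted in the posts (billions):

```python
# Transistor counts (billions) as quoted above
gtx_980 = 5.2          # GM204
big_maxwell = 8.0      # GM200 (980 Ti / Titan)
gtx_1080 = 7.2         # GP104
gp100 = 15.3           # big Pascal

print(big_maxwell / gtx_980)   # ~1.54: Maxwell's big/small ratio
print(gp100 / gtx_1080)        # ~2.1: Pascal's big/small ratio
print(gp100 / big_maxwell)     # ~1.91: big Pascal almost doubles big Maxwell
print(2 * gtx_980)             # 10.4: the "roughly twice a GTX 980" expectation
```

Which bears out the argument: relative to its own big chip, the 1080 is proportionally a smaller part than the 980 was, and a straight doubling of GM204 would have landed above 10B transistors.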
I agree on that point - they could do more if they then expanded the die, perhaps using the same size as on the old process node would give huge performance gains for little extra cost to them, and a still lower power draw. However, I don't think Nvidia would want to draw those guns straight away - they can always hold that back for another generation, tempting another set of upgrades in a year or so but without much extra engineering on their part (same architecture and process node, more transistors). Cynical?
So it seems like it could indeed be an early silicon batch then, to get it out the door as soon as possible, with early adopters paying more for nothing, basically. It does make more sense though; it would seem very strange for early silicon to clock higher than volume production, and I doubt they'd want to push it too far and drop efficiency in the review samples either.
It's reassuring from a review PoV though, provided a couple of sites do actually verify those claims whenever the standard cards arrive.
Last edited by watercooled; 09-05-2016 at 05:35 PM.
Ye agreed, it's disappointing but it's the way of the industry atm I think. I have no doubt Nvidia could pull out much more performance if they wanted, but I think they're looking to make the difference much bigger next year with wider availability of HBM2. In a sense, AMD saying they were aiming for mid-range, whatever that actually means to them, gave Nvidia a free pass to take it slow on this generation. Realistically, with that statement being a response to Nvidia from AMD, there wasn't any conscious planning behind it, but Nvidia probably felt safe enough that they could do so.
It's as I feared when we first started hearing about the next series, unfortunately: while this release is definitely a nice improvement, and with efficiency to boot, it's still a pretty half-baked release since HBM2 isn't quite here yet. I think they're going to save the bigger, faster Pascal chips to really drive home the improvement once they include HBM2. AMD beat them to HBM1 and they probably don't want to let that happen again. But who knows, if AMD starts pulling back into the market now, we might see an end to that practice soon enough; we can hope at least.
I'm still interested to see what the difference is in actual game benchmarks too. For now we have to assume that the 1080 is as fast as 2x 980s, but to the best of my knowledge we don't know what that data is based upon, or how cherry-picked it is. After all, it wasn't long ago they were saying it would be 10x faster than Maxwell in particular workloads.
To be honest this release is pretty much what I expected back when we first heard about 16nm GPUs: ~300mm2 GPUs first, with the bigger ones coming much later (you might even find a post with me saying the 300mm2 estimate if you search for it), so I wasn't far off. It all pretty much adds up when you factor in a relatively small reduction in per-transistor cost (at least early in the ramp), the yield ramp on a completely new process, and the fact we've transitioned from a very mature process with reticle-sized dies, meaning they got more from 28nm than they perhaps would have if 20nm had been usable for GPUs.
What I didn't really expect was, as others have said, the price hike from Nvidia.
As usual though, the 'it's the competition's fault because they're not targeting the high end' blame-game is nonsense; people need to realise the time scales involved in designing and producing complex processors like GPUs. AMD started talking about Polaris on-record long after Nvidia would have finalised the layout for GP104. Although it seems like a trivial thing to change, it takes a lot longer than a few months to lay out a new die and go through several prototyping/production runs, final silicon production, card manufacturing etc. AMD will know this better than anyone, which is why they can do things like the power consumption demos they did a few months ago: by that point it's far too late for Nvidia to start changing silicon. About the only thing that can change at this late stage is pricing.
Ye agreed. What we're seeing currently, I think, is somewhat similar to what happened with the next-gen consoles. Console games were stuck on the 360/PS3 for so long that the developers got very smart about pulling the most performance possible out of the system, just as 28nm got to the point where they were pulling out far more performance than they probably ever expected. So now with 16nm/14nm we're getting improved performance off the bat, but it's going to take a while for yields to get better and the designs to get better before we really see what it is capable of.
Definitely agree on the price hike though. Depending on how AMD handle their own pricing, considering they're usually cheaper anyway even before Nvidia ups their prices, they could really clean up if things roll their way. I think to an extent the perception of AMD is changing too; not that it was ever as bad as people make out in the first place, I don't think, but people appear to be taking them more seriously again. I was relatively happy to buy a 1070 if they stuck to their old pricing, though I was going to wait for AMD's offering anyway, but with the price hike I'm going to be keeping a particularly close eye on AMD's release. If a relatively small performance drop will save me just short of £50 after VAT from Nvidia's price hike, and potentially more from AMD selling cheaper (though the original prices of the 390 and the 970 are the same), then I'm definitely going to consider the option.