16GB/s per pin graphics memory will thus arrive earlier than expected.
Is this too little too late? After ten years of stagnation in GDDR development - there was no need to improve, as there was no viable competition - it took the advent of HBM1 for GDDR5X to appear, and only now that HBM2 is literally about to drop does GDDR6 get poked out.
As GDDR5X was a very Intel-esque "do just enough to stay ahead of the curve" move, I have very little faith in GDDR6, and it's a complete non-starter for me.
Someone made a comment elsewhere that brought this into perspective, though. At the sacrifice of some die space, GDDR5X is still competitive on speed vs HBM2 and, as far as I understand, considerably cheaper.
Case in point: if Vega 10's top card has two stacks of HBM2 at 204GB/s each, totalling 408GB/s, then it's still behind the GTX 1080, which has, to be fair, been out for a bit now. The P100's 720GB/s relies on four stacks, something that likely won't be available on a mainstream or consumer card for a while.
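To sanity-check the stack maths, here's a quick back-of-the-envelope sketch in Python - the per-stack figure assumes a 1024-bit interface at 1.6Gb/s per pin, and shipping clocks vary, so treat the numbers as rough:

# Aggregate HBM2 bandwidth by stack count (rough figures).
per_stack_gbs = 1024 / 8 * 1.6   # 1024-bit stack at 1.6 Gb/s per pin = 204.8 GB/s

for stacks in (1, 2, 4):
    print(f"{stacks} stack(s): {stacks * per_stack_gbs:.0f} GB/s")

# 1 stack(s): 205 GB/s
# 2 stack(s): 410 GB/s   (the rumoured Vega 10 config)
# 4 stack(s): 819 GB/s   (P100 runs its four stacks slower, hence 720 GB/s)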
Fast, neat little GDDR chips will be good on cheap cards for a while yet, IMO.
I'm not convinced it's "considerably" cheaper. It certainly will be cheaper, but if it were that cheap I suspect Nvidia would've used it throughout its range. The fact that only the top card in the range gets GDDR5X suggests it's still quite expensive - all the "cheaper" cards are making do with standard GDDR5. And of course, for the total cost of the card, you've got to offset the interposer/HBM costs against the simplified PCB, since you don't have to run all those memory traces through it. I suspect the cost differential really isn't that significant.
Besides, it's not just "some die space", it's also PCB space and power budget. When AMD were releasing Fury X they were talking about power savings in the region of 20W - 30W: that's 10% of the total power budget, which can either be used to make a lower power card (Nano @ 180W had excellent perf/watt), or ploughed into boosting the GPU clocks and getting higher absolute performance.
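For a sense of scale, here's that saving as a fraction of board power - a rough sketch assuming the Fury X's 275W board power (the 20W-30W saving is the figure quoted above):

# HBM power saving as a share of total board power (rough figures).
board_power_w = 275   # assumed Fury X board power
for saving_w in (20, 30):
    print(f"{saving_w}W saving: {saving_w / board_power_w:.0%} of board power")

# 20W saving: 7% of board power
# 30W saving: 11% of board power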
Erm, GTX 1080 peak theoretical memory throughput is 320 GB/s - so Vega's 2-stack HBM2 implementation will have over 25% more bandwidth available...
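The arithmetic, for anyone checking: peak theoretical bandwidth is just bus width times per-pin data rate. A minimal sketch (the 1.6Gb/s HBM2 pin rate is inferred from the 204GB/s-per-stack figure above):

# Peak theoretical bandwidth = bus width (bits) / 8 * per-pin data rate (Gb/s).
def bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

gtx1080 = bandwidth_gbs(256, 10.0)         # 256-bit GDDR5X at 10 Gb/s = 320 GB/s
vega_hbm2 = bandwidth_gbs(2 * 1024, 1.6)   # two 1024-bit HBM2 stacks = 409.6 GB/s
print(f"{vega_hbm2 / gtx1080 - 1:.0%} more bandwidth")   # -> 28%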
I'm not convinced cost factors into anything about the 1080/70 - looking at how much Nvidia wants for them, I think the lack of GDDR5X on the 1070 is an artificial limitation to milk more money from consumers.
Sorry sir, I forgot to do my homework yesterday, but I have it now!
Firstly, GDDR5X may not have been all that cheap when the last range of GPUs arrived from Nvidia, as it was rather new, but I can bet you the price of that stuff will go down once HBM2 becomes viable - which should be very, very soon if mass production is indeed happening and the lack of supply is clearing. If a 1080 Ti ever arrives, my bet is that it will come with a G5X configuration. Also, I'm no expert, but it seems to me that the cost of running some traces across the PCB is pennies at worst, whereas the cost of implementing a very new design on a mainstream consumer card (testing, reliability etc.), plus the lack of supply pushing the price up, plus the premium for a space-saving fastest-per-pin memory module, would be significant.
We all know that AMD's cards have been behind the curve on power consumption vs performance for at least one or two generations, and there's no doubting that HBM helps with this issue; I'm sure that was a calculated eventual benefit when they invested in the tech to start with. Even now that the second iteration has arrived and both companies are sharing their toys, it still doesn't seem to me that a single HBM stack could compete on lower-end cards against a GDDR6 setup with a few modules offering nearly the same (or marginally more) bandwidth at a lower price. Give it a year or two more, though. At the higher end it's all about performance and the cost is less relevant anyway, so there HBM is worthwhile.
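To put some rough numbers on that single-stack-vs-GDDR6 comparison - a sketch assuming headline per-pin rates (16Gb/s GDDR6 with 32-bit chips, 2Gb/s HBM2), which shipping parts may not hit:

# A small GDDR6 array vs a single HBM2 stack, at headline per-pin rates.
def gddr6_gbs(chips, gbps_per_pin=16):
    return chips * 32 / 8 * gbps_per_pin     # 32-bit interface per chip

def hbm2_gbs(stacks, gbps_per_pin=2.0):
    return stacks * 1024 / 8 * gbps_per_pin  # 1024-bit interface per stack

print(gddr6_gbs(4))   # four chips (128-bit bus) = 256.0 GB/s
print(hbm2_gbs(1))    # one full-speed stack     = 256.0 GB/s

So on paper a 128-bit GDDR6 setup matches a full-speed HBM2 stack, and it comes down to price.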
This was a mistake on my part - for some reason I confused the Titan's config (480GB/s G5X) with that of the 1080. I don't spend much time reading the specs for cards I can't afford any more.
Regardless, my point was that GDDR memory could still be very relevant for a couple more years. It won't last, sure, but HBM isn't perfect yet.
Oh, definitely agree with that. I just don't think GDDR5X is currently a lot cheaper than Vega 10's 2-stack HBM2 implementation. If it was it wouldn't make business sense to use HBM2 at all.
GDDR5 clearly is a lot cheaper than both 5X and HBM2, which is why all current cards except nvidia's very high end products are using it rather than GDDR5X. When GDDR6 lands GDDR5X might drop in price enough for the RX480 / GTX1060 class cards to use it. But by then there'll also probably be a new generation of HBM coming, potentially making HBM2 cheaper.
As to power consumption, as I mentioned above the Nano was *very* competitive against the generation-equivalent nvidia cards at stock clocks; it was generally faster than a GTX 980 and had similar power consumption. It was only on the bleeding edge of the clock/performance curve that AMD needed more voltage and started suffering in power terms. That's something that Polaris didn't really fix, but from everything they've put out about Vega it looks to be tuned for higher clock speeds to start with. That should mean more performance at the same power, or the same performance at much lower power, compared to previous designs...
I doubt GDDR5X will drop much when GDDR6 starts to ship; in fact, I am wondering if this gfx-memory shift we are currently in is responsible (at least partly) for the recent increases in GPU prices.
Just how many types of gfx RAM are they manufacturing at the moment? Not long ago cards were either GDDR5 or DDR3. Now we have GDDR5, GDDR5X, HBM, HBM2 and GDDR6 on the horizon... that's a big fragmentation of the manufacturing facilities.
GDDR5X is far cheaper than HBM1, which is why AMD couldn't make money on the Fury cards, and far easier to implement - HBM2, by contrast, only just reached mass production, which has been holding up Vega big time. GDDR5X and getting to market faster are also why NV was able to cash in with records (margins, profit, revenue etc.). GDDR5X, and now GDDR6, use existing equipment with tweaks; HBM required all-new tools, and the lack of volume keeps pricing high. BTW, NV is pushing Micron to ramp GDDR5X massively so they can use it on the ENTIRE refresh coming in H2, except the very lowest cards, and then of course we'll see a new GDDR6 top end put out (Q1-Q2 next year? Depends on AMD, I guess). So it's cheap enough that NV wants it on almost all their cards.
Also note: if you can't use the bandwidth, what's the point? All the 1080 needed was GDDR5X, not HBM. All the memory bandwidth in the world didn't help AMD, right? Same story with Vega vs. whatever NV answers with (likely simply faster GDDR5X first, then GDDR6 later). Note that HBM2 supposedly has a larger footprint and requires a larger interposer, raising costs again - though we'll have to wait and see if that pans out.
http://www.anandtech.com/show/9969/jedec-publishes-hbm2-specification
"The potential of the second-gen HBM seems to be rather high, but the costs remain a major concern."
I'm not aware of the above changing. AMD should be going with a memory that doesn't hold up their product or price it to death. NV went mainstream with good-enough bandwidth and sold the dickens out of the high end. That is called good business. Without a major advantage, you shouldn't go with "blue crystals", so to speak. Buzzwords don't win benchmarks or races to market, and can often raise prices for no benefit (see HBM1 and HBM2). HBM2 just hit mass production. If AMD had chosen GDDR5X (perhaps slightly faster chips), they would already be out.
https://pipedot.org/article/2BF9W
AMD files a GPU theft suit, just like NV did around 2014? Not on the front page of Hexus? Nvidia should have won, and AMD should too. I predicted this suit would come at some point; of course, I thought NV would win and AMD would use that case to build their own. I'm thinking Samsung just had better lawyers or more payoffs... LOL. AMD has even less to go on, so I expect it turns out the same for them, unfortunately. Then again, NV settled out of court, so we have no idea what they got, but no precedent was set for AMD to use (that's the unfortunate thing about an out-of-court settlement).
I wonder what happens when Intel's licence for NV's stuff runs out shortly.