GDDR5X is facilitating data rates of more than 13Gb/s in early tests.
I still question how useful this will be...
Seems bizarre to me. Why spend the cash to develop a technology between two existing ones, i.e. less advanced than HBM, instead of just investing in HBM? I don't see enthusiasts wanting less than HBM, and I can't see budget users shelling out for more than GDDR5. My 3GB R9 280X performs beautifully; if I were going to upgrade, it would be to an HBM GPU.
If I'm reading the chart right, a single chip of GDDR5X - so 1GB of VRAM - can run a 64-bit interface at 12Gbps (maybe up to 13, based on the rest of the article). The equivalent GDDR5 setup (running at 6Gbps) would be a 128-bit interface and require four GDDR5 chips - sanity-checked in the sketch below.
Tell a GPU manufacturer they can replace four GDDR5 chips with a single GDDR5X chip - meaning fewer traces, less soldering, and therefore smaller or less complex PCBs - with no drop in performance, and I can see them biting your hand off at the lower end of the market...
Surely this is going to make for some mad, cheap and fast low-end cards.
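A quick back-of-envelope check of that bandwidth maths, assuming (as above) one GDDR5X chip on a 64-bit interface at 12Gbps per pin - just peak numbers, nothing vendor-specific:

```python
# Sanity check of the maths above: one GDDR5X chip on a 64-bit bus at
# 12Gbps per pin vs four 32-bit GDDR5 chips (128-bit total) at 6Gbps.

def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak bandwidth in GB/s: pin count x per-pin rate / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

gddr5x_single = bandwidth_gbs(64, 12)   # one GDDR5X chip (assumed config)
gddr5_quad = bandwidth_gbs(128, 6)      # four 32-bit GDDR5 chips

print(f"1x GDDR5X, 64-bit @ 12Gbps : {gddr5x_single:.0f} GB/s")
print(f"4x GDDR5, 128-bit @ 6Gbps  : {gddr5_quad:.0f} GB/s")
# Both print 96 GB/s, so the one-for-four swap holds at these rates.
```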
Intel HD Graphics with 1GB GDDR5X built in... wow!
85-90% of HBM bandwidth, but far cheaper to upgrade your plant to make it (fewer tool changes etc.) and far cheaper to manufacture - a tweak of an old process vs. moving in all-new equipment. Meaning far faster to market in volume too. All of which in the end means lower cost, for basically HBM qualities that are already MORE than needed.

As you can see, HBM did nothing for AMD. HBM2 will again do nothing for AMD (or NV) until maybe a rev or two later, when they are using 8K or VR heavily etc. We just don't need the added cost of HBM1/2 right now for what we are doing. They're great stuff for a few revs down the road, but not with 95% of the world using 1080p or less. NV was able to beat HBM with little effort (clocking GDDR5 up to 7.2GHz, better compression algorithms to shrink graphics assets, etc. - all of which meant NV wasn't bandwidth constrained). HBM1 and 2 are great tech, don't get me wrong, just not yet.

So this is a cheap, fast-to-roll-out solution that uses mostly the same tech and production lines, which, unlike AMD's cards, won't drive up prices and cause shortages due to HBM2 this time (like they had with HBM1). I really hope AMD goes this way too for at least a first run of their next cards (for speed to market, volume, and cost) - no point in driving up your price (shrinking any profit you get) just for a marketing point. NV is seemingly choosing the 'cheaper but all that is needed' route, which is going to yield more profit (badly needed by AMD, by the way).
As the OP noted, I wonder what it would do added to an integrated chip... Hmm. Either way, good stuff - especially at the low end, which would get really good bandwidth (something only high-end models usually get). So low-end discrete is definitely rising, which will force the high end to move up too. I like this.
That's only true if my assumption about a single chip doing 64-bit @ 13Gbps is right, and it's also a comparison to HBM1, which is already available in the mass market. More to the point, the first generation of HBM technology is 23% faster than GDDR5X and has been in mass production for pushing six months. HBM2, stack for chip, is over twice as fast as GDDR5X and has (IIRC) up to 8x the capacity.

GDDR5X isn't competing with HBM1, and it can't compete with HBM2, so comparisons with those are meaningless. It's competing with GDDR5 (and, to an extent, the DDR3 that turns up on very low-end cards), and it could do quite well in that segment if they can get it to market in sufficient volume and at a low enough price. Whether they can is yet to be seen, though...
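For what it's worth, the stack-for-chip numbers work out like this, again assuming the 64-bit @ 13Gbps figure for a single GDDR5X chip and HBM's standard 1024-bit-per-stack interface:

```python
# Stack-for-chip comparison, using the assumed 64-bit @ 13Gbps figure
# for one GDDR5X chip and the 1024-bit interface of an HBM stack.

def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits * data_rate_gbps / 8

gddr5x_chip = bandwidth_gbs(64, 13)    # ~104 GB/s (assumed, per my reading)
hbm1_stack = bandwidth_gbs(1024, 1)    # 128 GB/s per HBM1 stack
hbm2_stack = bandwidth_gbs(1024, 2)    # 256 GB/s per HBM2 stack

print(f"HBM1 stack vs GDDR5X chip: {hbm1_stack / gddr5x_chip - 1:.0%} faster")  # ~23%
print(f"HBM2 stack vs GDDR5X chip: {hbm2_stack / gddr5x_chip:.1f}x")            # ~2.5x
```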
Since GDDR5X is a direct replacement for GDDR5, it's as likely to be integrated into a CPU/APU as GDDR5 itself - i.e. not at all.
It'd be interesting to see someone downclock the memory of a Fury and a 980 Ti to see just how memory-bandwidth limited they are. It could be that you can't fit enough processing grunt on an economical 16nm die to saturate a GDDR5X pipeline, in which case sticking with the older tech could bring large cost reductions for no performance impact.
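You can see why the question matters from a rough bytes-per-FLOP ratio; this sketch uses approximate published peak figures for both cards, so treat the exact numbers as ballpark:

```python
# Rough bytes-per-FLOP for the two cards, using approximate published
# peak figures (single-precision GFLOPS and GB/s; ballpark only).

cards = {
    "Fury X (HBM1)":  {"gflops": 8600, "bw_gbs": 512},
    "980 Ti (GDDR5)": {"gflops": 5600, "bw_gbs": 336},
}

for name, spec in cards.items():
    ratio = spec["bw_gbs"] / spec["gflops"]
    print(f"{name}: {ratio:.3f} bytes per FLOP")
# Both land around 0.06 bytes/FLOP; a downclocking test would show how far
# below that ratio you can drop before performance actually falls off.
```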
"As you see HBM did nothing for AMD. HBM2 will again do nothing for AMD (or NV)..." Please! HBM has done a lot for AMD: the Fury Nano is very small and uses less electricity than the 980 while being way faster. AMD has proven for the first time that a low-profile card can be just as fast, thanks to HBM.
Card manufacturers want more profit as soon as possible, so using GDDR5X means quicker production on some refreshed cards, or even new ones. That's a lot better than waiting on HBM to be produced - save the limited supply for high-end and upper-mid-tier cards.