
Thread: Micron GDDR6 production will begin in H2 this year

  1. #1
    HEXUS.admin
    Join Date
    Apr 2005
    Posts
    31,709
    Thanks
    0
    Thanked
    2,073 times in 719 posts

    Micron GDDR6 production will begin in H2 this year

    16Gb/s per pin graphics memory will thus arrive earlier than expected.
    Read more.
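
    For context, a quick sketch of how a per-pin rate translates into card-level bandwidth; the 256-bit bus width below is just an assumed example for illustration, not a figure from the article:

    Code:
    # Peak theoretical bandwidth = per-pin rate (Gb/s) x bus width (bits) / 8 bits per byte.
    def peak_bandwidth_gbs(per_pin_gbps, bus_width_bits):
        return per_pin_gbps * bus_width_bits / 8

    # GDDR6 at the quoted 16 Gb/s per pin on a hypothetical 256-bit bus:
    print(peak_bandwidth_gbs(16, 256))   # 512.0 GB/s
    # 10 Gb/s GDDR5X on the same width, for comparison:
    print(peak_bandwidth_gbs(10, 256))   # 320.0 GB/s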

  2. #2
    Senior Member
    Join Date
    May 2014
    Posts
    2,385
    Thanks
    181
    Thanked
    304 times in 221 posts

    Re: Micron GDDR6 production will begin in H2 this year

    Is this too little, too late? After 10 years of stagnation in GDDR development, because there was no viable competition and hence no need to improve, it took the advent of HBMv1 for them to bring out GDDR5X, and only now that HBMv2 is literally about to drop does GDDR6 get poked out.

    As GDDR5X was a very Intel-esque "do just enough to be ahead of the curve" effort, I have very little faith in GDDR6, and it's a complete non-starter for me.

  3. #3
    Two Places At Once Ozaron's Avatar
    Join Date
    Jan 2017
    Location
    Sometimes UK
    Posts
    638
    Thanks
    86
    Thanked
    34 times in 33 posts
    • Ozaron's system
      • Motherboard:
      • MSI X570 Unify
      • CPU:
      • Ryzen 3700X
      • Memory:
      • 32GB Patriot Blackout @ 3800 CL16
      • Storage:
      • Toshiba X300 4TB (2), Samsung 850 Evo 500GB
      • Graphics card(s):
      • Sapphire 5700XT, Sapphire R9 Fury Nitro
      • PSU:
      • Seasonic M12-II 620w
      • Case:
      • Corsair Obsidian 500D
      • Operating System:
      • W10 Enterprise 64bit
      • Monitor(s):
      • Gigabyte G27QC
      • Internet:
      • 2.5 MB/s ↓ 0.86 MB/s ↑ ~20ms

    Re: Micron GDDR6 production will begin in H2 this year

    Someone made a comment elsewhere that brought this into perspective, though. At the sacrifice of some die space, GDDR5X is still competitive on speed vs HBM2 and as far as I understand, considerably cheaper.
    Case in point: if Vega 10's prime card has two stacks of HBM2 at 204GB/s, totalling 408GB/s, then it's still behind the GTX1080 which has, to be fair, been out for a bit now. P100's 720GB/s is reliant on using 4 stacks, something that likely won't be available on a mainstream or consumer card for a while.
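
    For reference, the per-stack figure follows from HBM2's 1024-bit interface per stack. A rough sketch below; the per-pin rates are back-calculated from the quoted totals, not confirmed Vega or P100 specs:

    Code:
    # HBM2 bandwidth = 1024 data pins per stack x per-pin rate (Gb/s) x stacks / 8.
    # Per-pin rates here are inferred from the quoted totals, not confirmed specs.
    def hbm2_bandwidth_gbs(per_pin_gbps, stacks):
        return per_pin_gbps * 1024 * stacks / 8

    print(hbm2_bandwidth_gbs(1.6, 1))   # 204.8 GB/s per stack
    print(hbm2_bandwidth_gbs(1.6, 2))   # 409.6 GB/s for two stacks
    print(hbm2_bandwidth_gbs(1.4, 4))   # 716.8 GB/s, roughly the P100's quoted 720 GB/s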

    Fast, neat little GDDR chips will be good on cheap cards for a while yet, IMO.

  4. #4
    Not a good person scaryjim's Avatar
    Join Date
    Jan 2009
    Location
    Gateshead
    Posts
    15,196
    Thanks
    1,232
    Thanked
    2,290 times in 1,873 posts
    • scaryjim's system
      • Motherboard:
      • Dell Inspiron
      • CPU:
      • Core i5 8250U
      • Memory:
      • 2x 4GB DDR4 2666
      • Storage:
      • 128GB M.2 SSD + 1TB HDD
      • Graphics card(s):
      • Radeon R5 230
      • PSU:
      • Battery/Dell brick
      • Case:
      • Dell Inspiron 5570
      • Operating System:
      • Windows 10
      • Monitor(s):
      • 15" 1080p laptop panel

    Re: Micron GDDR6 production will begin in H2 this year

    Quote Originally Posted by Ozaron View Post
    ... At the sacrifice of some die space, GDDR5X is still competitive on speed vs HBM2 and as far as I understand, considerably cheaper. ....
    I'm not convinced it's "considerably" cheaper. It certainly will be cheaper, but if it was that cheap I suspect nvidia would've used it throughout its range. The fact that only the top card in the range gets GDDR5X suggests it's still quite expensive - all the "cheaper" cards are making do with standard GDDR5.... And of course for the total cost of the card you've got to offset the interposer/HBM costs against the simplified PCBs since you don't have to run all those memory traces through them ... I suspect the cost differential really isn't that significant...

    Besides, it's not just "some die space", it's also PCB space and power budget. When AMD were releasing Fury X they were talking about power savings in the region of 20W - 30W: that's 10% of the total power budget, which can either be used to make a lower power card (Nano @ 180W had excellent perf/watt), or ploughed into boosting the GPU clocks and getting higher absolute performance.

    Quote Originally Posted by Ozaron View Post
    Case in point: if Vega 10's prime card has two stacks of HBM2 at 204GB/s, totalling 408GB/s, then it's still behind the GTX1080 ...
    Erm, the GTX 1080's peak theoretical memory throughput is 320 GB/s - so Vega's 2-stack HBM2 implementation will have over 25% more bandwidth available...
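
    A quick check on those numbers (a sketch; the 10 Gb/s / 256-bit figures are the commonly quoted GTX 1080 GDDR5X configuration):

    Code:
    # GTX 1080: 10 Gb/s GDDR5X across a 256-bit bus.
    gtx1080_gbs = 10 * 256 / 8                   # 320.0 GB/s
    # Vega 10 as described above: two HBM2 stacks at ~204 GB/s each.
    vega_gbs = 2 * 204                           # 408 GB/s
    print((vega_gbs / gtx1080_gbs - 1) * 100)    # ~27.5% more bandwidth for the HBM2 setup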

  5. #5
    Senior Member Xlucine's Avatar
    Join Date
    May 2014
    Posts
    2,162
    Thanks
    298
    Thanked
    188 times in 147 posts
    • Xlucine's system
      • Motherboard:
      • Asus prime B650M-A II
      • CPU:
      • 7900
      • Memory:
      • 32GB @ 4.8 Gt/s (don't want to wait for memory training)
      • Storage:
      • Crucial P5+ 2TB (boot), Crucial P5 1TB, Crucial MX500 1TB, Crucial MX100 512GB
      • Graphics card(s):
      • Asus Dual 4070 w/ shroud mod
      • PSU:
      • Fractal Design ION+ 560P
      • Case:
      • Silverstone TJ08-E
      • Operating System:
      • W10 pro
      • Monitor(s):
      • Viewsonic vx3211-2k-mhd, Dell P2414H
      • Internet:
      • Gigabit symmetrical

    Re: Micron GDDR6 production will begin in H2 this year

    I'm not convinced cost factors into anything about the 1080/70 - looking at how much nvidia wants for them, I think the lack of GDDR5X is an artificial limitation to milk more money from consumers.

  6. #6
    Hooning about Hoonigan's Avatar
    Join Date
    Sep 2011
    Posts
    2,322
    Thanks
    172
    Thanked
    445 times in 319 posts
    • Hoonigan's system
      • Motherboard:
      • MSI MEG X570 ACE
      • CPU:
      • AMD Ryzen 7 5800X3D
      • Memory:
      • 32GB Corsair Dominator Platinum RGB
      • Storage:
      • 2x 2TB Gigabyte NVMe 4.0
      • Graphics card(s):
      • MSI RTX 4080 Super GAMING X SLIM
      • PSU:
      • be quiet! Straight Power 11 Platinum 750W
      • Case:
      • Corsair Crystal Series 680X
      • Operating System:
      • Windows 11 x64
      • Monitor(s):
      • Alienware AW3423DWF + ASUS ROG PG279Q
      • Internet:
      • Giganet (City Fibre) 900/900

    Re: Micron GDDR6 production will begin in H2 this year

    Quote Originally Posted by Xlucine View Post
    I think the lack of GDDR5X is an artificial limitation to milk more money from consumers
    But that's just purely speculation. GDDR5X is expensive, but how much more than GDDR5, I'm not sure.
    You also can't hide from the fact that they're producing the fastest graphics cards right now.

  7. #7
    Senior Member Xlucine's Avatar
    Join Date
    May 2014
    Posts
    2,162
    Thanks
    298
    Thanked
    188 times in 147 posts
    • Xlucine's system
      • Motherboard:
      • Asus prime B650M-A II
      • CPU:
      • 7900
      • Memory:
      • 32GB @ 4.8 Gt/s (don't want to wait for memory training)
      • Storage:
      • Crucial P5+ 2TB (boot), Crucial P5 1TB, Crucial MX500 1TB, Crucial MX100 512GB
      • Graphics card(s):
      • Asus Dual 4070 w/ shroud mod
      • PSU:
      • Fractal Design ION+ 560P
      • Case:
      • Silverstone TJ08-E
      • Operating System:
      • W10 pro
      • Monitor(s):
      • Viewsonic vx3211-2k-mhd, Dell P2414H
      • Internet:
      • Gigabit symmetrical

    Re: Micron GDDR6 production will begin in H2 this year

    Quote Originally Posted by Hoonigan View Post
    But that's just purely speculation. GDDR5X is expensive, but how much more than GDDR5, I'm not sure.
    You also can't hide from the fact that they're producing the fastest graphics cards right now.
    Where did I suggest otherwise?

  8. #8
    Two Places At Once Ozaron's Avatar
    Join Date
    Jan 2017
    Location
    Sometimes UK
    Posts
    638
    Thanks
    86
    Thanked
    34 times in 33 posts
    • Ozaron's system
      • Motherboard:
      • MSI X570 Unify
      • CPU:
      • Ryzen 3700X
      • Memory:
      • 32GB Patriot Blackout @ 3800 CL16
      • Storage:
      • Toshiba X300 4TB (2), Samsung 850 Evo 500GB
      • Graphics card(s):
      • Sapphire 5700XT, Sapphire R9 Fury Nitro
      • PSU:
      • Seasonic M12-II 620w
      • Case:
      • Corsair Obsidian 500D
      • Operating System:
      • W10 Enterprise 64bit
      • Monitor(s):
      • Gigabyte G27QC
      • Internet:
      • 2.5 MB/s ↓ 0.86 MB/s ↑ ~20ms

    Re: Micron GDDR6 production will begin in H2 this year

    Quote Originally Posted by scaryjim View Post
    I'm not convinced it's "considerably" cheaper. It certainly will be cheaper, but if it was that cheap I suspect nvidia would've used it throughout its range. The fact that only the top card in the range gets GDDR5X suggests it's still quite expensive - all the "cheaper" cards are making do with standard GDDR5.... And of course for the total cost of the card you've got to offset the interposer/HBM costs against the simplified PCBs since you don't have to run all those memory traces through them ... I suspect the cost differential really isn't that significant...

    Besides, it's not just "some die space", it's also PCB space and power budget. When AMD were releasing Fury X they were talking about power savings in the region of 20W - 30W: that's 10% of the total power budget, which can either be used to make a lower power card (Nano @ 180W had excellent perf/watt), or ploughed into boosting the GPU clocks and getting higher absolute performance.
    Sorry sir, I forgot to do my homework yesterday, but I have it now!

    Firstly, GDDR5X may not have been all that cheap when the last range of GPUs arrived from Nvidia, as it was rather new, but I can bet you the price of that stuff will come down as and when HBM2 becomes viable, which should be very, very soon if mass production is indeed happening and the lack of supply is clearing. If a 1080 Ti ever arrives, my bet is that it will come with a G5X configuration. Also, I'm no expert, but it seems to me that the cost of running some extra traces across the PCB is pennies at worst, whereas the cost of implementing a very new design on a mainstream consumer card (testing, reliability etc.), plus the lack of supply pushing the price up, plus the premium for a space-saving, fastest-per-pin memory module, would be significant.

    We all know that AMD's cards have been behind the curve for at least one or two generations on power consumption vs performance, and there's no doubting that HBM helps with this. I'm sure that was a calculated eventual benefit for them when investing in the tech to start with. Even now that the second iteration has arrived and both companies are sharing their toys, it still doesn't seem to me that a single-stack HBM setup could compete on lower-end cards against a GDDR6 setup with a couple of modules that comes in at nearly the same / the same / marginally more bandwidth for a lower price. Give it a year or two more, though. At the higher end it's all about performance and the cost is less relevant anyway, so there HBM is worthwhile.

    Quote Originally Posted by scaryjim View Post
    Erm, GTX 1080 peak theoretical memory throughput is 320 GB/s - so Vega's 2-stack HBM2 implementation will have over 25% more bandwidth available...
    This was a mistake on my part - for some reason I confused the Titan's config (480GB/s G5X) with that of the 1080. I don't spend much time reading the specs of cards I can't afford any more.

    Regardless, my point was that for a couple more years GDDR memory could still be very relevant. It won't last, sure, but HBM isn't perfect yet.

  9. #9
    Not a good person scaryjim's Avatar
    Join Date
    Jan 2009
    Location
    Gateshead
    Posts
    15,196
    Thanks
    1,232
    Thanked
    2,290 times in 1,873 posts
    • scaryjim's system
      • Motherboard:
      • Dell Inspiron
      • CPU:
      • Core i5 8250U
      • Memory:
      • 2x 4GB DDR4 2666
      • Storage:
      • 128GB M.2 SSD + 1TB HDD
      • Graphics card(s):
      • Radeon R5 230
      • PSU:
      • Battery/Dell brick
      • Case:
      • Dell Inspiron 5570
      • Operating System:
      • Windows 10
      • Monitor(s):
      • 15" 1080p laptop panel

    Re: Micron GDDR6 production will begin in H2 this year

    Quote Originally Posted by Ozaron View Post
    ... Regardless my point was that for a couple more years GDDR memory could still be very relevant. ...
    Oh, definitely agree with that. I just don't think GDDR5X is currently a lot cheaper than Vega 10's 2-stack HBM2 implementation. If it was it wouldn't make business sense to use HBM2 at all.

    GDDR5 clearly is a lot cheaper than both 5X and HBM2, which is why all current cards except nvidia's very high end products are using it rather than GDDR5X. When GDDR6 lands GDDR5X might drop in price enough for the RX480 / GTX1060 class cards to use it. But by then there'll also probably be a new generation of HBM coming, potentially making HBM2 cheaper.

    As to power consumption, as I mentioned above the Nano was *very* competitive against the generation-equivalent nvidia cards at stock clocks; it was generally faster than a GTX 980 and had similar power consumption. It was only on the bleeding edge of the clock/performance curve that AMD needed more voltage and started suffering in power terms. That's something that Polaris didn't really fix, but from everything they've put out about Vega it looks to be tuned for higher clock speeds to start with. That should mean more performance at the same power, or the same performance at much lower power, compared to previous designs...

  10. #10
    Anthropomorphic Personification shaithis's Avatar
    Join Date
    Apr 2004
    Location
    The Last Aerie
    Posts
    10,857
    Thanks
    645
    Thanked
    872 times in 736 posts
    • shaithis's system
      • Motherboard:
      • Asus P8Z77 WS
      • CPU:
      • i7 3770k @ 4.5GHz
      • Memory:
      • 32GB HyperX 1866
      • Storage:
      • Lots!
      • Graphics card(s):
      • Sapphire Fury X
      • PSU:
      • Corsair HX850
      • Case:
      • Corsair 600T (White)
      • Operating System:
      • Windows 10 x64
      • Monitor(s):
      • 2 x Dell 3007
      • Internet:
      • Zen 80Mb Fibre

    Re: Micron GDDR6 production will begin in H2 this year

    I doubt GDDR5X will drop much when GDDR6 starts to ship; in fact, I'm wondering if this gfx memory shift we're currently in is (at least partly) responsible for the recent increases in GPU prices.

    Just how many types of gfx RAM are they manufacturing at the moment? Not long ago cards were either GDDR5 or DDR3. Now we have GDDR5, GDDR5X, HBM, HBM2 and GDDR6 on the horizon... that's a big fragmentation of the manufacturing facilities.
    Main PC: Asus Rampage IV Extreme / 3960X@4.5GHz / Antec H1200 Pro / 32GB DDR3-1866 Quad Channel / Sapphire Fury X / Areca 1680 / 850W EVGA SuperNOVA Gold 2 / Corsair 600T / 2x Dell 3007 / 4 x 250GB SSD + 2 x 80GB SSD / 4 x 1TB HDD (RAID 10) / Windows 10 Pro, Yosemite & Ubuntu
    HTPC: AsRock Z77 Pro 4 / 3770K@4.2GHz / 24GB / GTX 1080 / SST-LC20 / Antec TP-550 / Hisense 65k5510 4K TV / HTC Vive / 2 x 240GB SSD + 12TB HDD Space / Race Seat / Logitech G29 / Win 10 Pro
    HTPC2: Asus AM1I-A / 5150 / 4GB / Corsair Force 3 240GB / Silverstone SST-ML05B + ST30SF / Samsung UE60H6200 TV / Windows 10 Pro
    Spare/Loaner: Gigabyte EX58-UD5 / i950 / 12GB / HD7870 / Corsair 300R / Silverpower 700W modular
    NAS 1: HP N40L / 12GB ECC RAM / 2 x 3TB Arrays || NAS 2: Dell PowerEdge T110 II / 24GB ECC RAM / 2 x 3TB Hybrid arrays || Network:Buffalo WZR-1166DHP w/DD-WRT + HP ProCurve 1800-24G
    Laptop: Dell Precision 5510 Printer: HP CP1515n || Phone: Huawei P30 || Other: Samsung Galaxy Tab 4 Pro 10.1 CM14 / Playstation 4 + G29 + 2TB Hybrid drive

  11. #11
    Senior Member
    Join Date
    May 2015
    Posts
    359
    Thanks
    0
    Thanked
    7 times in 7 posts

    Re: Micron GDDR6 production will begin in H2 this year

    Quote Originally Posted by scaryjim View Post
    Quote Originally Posted by Ozaron View Post
    ... At the sacrifice of some die space, GDDR5X is still competitive on speed vs HBM2 and as far as I understand, considerably cheaper. ....
    I'm not convinced it's "considerably" cheaper. It certainly will be cheaper, but if it was that cheap I suspect nvidia would've used it throughout its range. The fact that only the top card in the range gets GDDR5X suggests it's still quite expensive - all the "cheaper" cards are making do with standard GDDR5.... And of course for the total cost of the card you've got to offset the interposer/HBM costs against the simplified PCBs since you don't have to run all those memory traces through them ... I suspect the cost differential really isn't that significant...

    Besides, it's not just "some die space", it's also PCB space and power budget. When AMD were releasing Fury X they were talking about power savings in the region of 20W - 30W: that's 10% of the total power budget, which can either be used to make a lower power card (Nano @ 180W had excellent perf/watt), or ploughed into boosting the GPU clocks and getting higher absolute performance.

    Quote Originally Posted by Ozaron View Post
    Case in point: if Vega 10's prime card has two stacks of HBM2 at 204GB/s, totalling 408GB/s, then it's still behind the GTX1080 ...
    Erm, GTX 1080 peak theoretical memory throughput is 320 GB/s - so Vega's 2-stack HBM2 implementation will have over 25% more bandwidth available...
    GDDR5X is far cheaper than HBM1, which is why AMD couldn't make money on the Fury cards, and far easier to implement, which is why HBM2 only just reached mass production and has been holding up Vega big time. GDDR5X and getting to market faster are also why NV was able to cash in with record margins, profit, revenue etc. GDDR5X, and now GDDR6, use existing equipment with tweaks; HBM required all-new tools, and lack of use keeps pricing high. BTW, NV is pushing Micron to ramp GDDR5X massively so they can use it on the ENTIRE refresh coming in H2, except the very lowest cards, and then of course we'll see a new GDDR6 top end put out (Q1-Q2 next year? Depends on AMD, I guess). So it's cheap enough that NV wants it on almost all their cards.

    Also note, if you can't use the bandwidth, what is the point? All the 1080 needed was GDDR5X, not HBM. All the memory bandwidth in the world didn't help AMD, right? Same story with Vega vs. whatever NV answers with (likely just faster GDDR5X first, then GDDR6 later). Note that HBM2 supposedly has a larger footprint and requires a larger interposer, raising costs again, though we'll have to wait and see if that pans out.

    http://www.anandtech.com/show/9969/jedec-publishes-hbm2-specification
    "The potential of the second-gen HBM seems to be rather high, but the costs remain a major concern."

    Not aware of the above changing. AMD should be going with a memory that doesn't hold up their product or price it to death. NV went mainstream with good-enough bandwidth and sold the dickens out of the high end. That is called good business. Without a major advantage you shouldn't go with "blue crystals", so to speak. Buzzwords don't win benchmarks or races to market, and can often raise prices for no benefit (see HBM1 and HBM2). HBM2 just hit mass production. If AMD had chosen GDDR5X (perhaps slightly faster chips), Vega would already be out.

    https://pipedot.org/article/2BF9W
    AMD files a GPU patent-theft suit, just like NV did around 2014? Not on the front page of HEXUS? Nvidia should have won, and AMD should too. I predicted this suit would come at some point; of course, I thought NV should have won and AMD would use that case to build their own. I'm thinking Samsung just had better lawyers or more payoffs... LOL. AMD has even less to throw at suits, so I expect it turns out the same for them, unfortunately. Then again, they settled out of court, so we have no idea what NV got, but no precedent was set for AMD to use (that's the downside of a settlement out of court).

    I wonder what happens when Intel's licence for NV's stuff runs out shortly.
