And its 8GB HBM2 density components will hit mass production in Q4.
Surprised to see them suggest the 2GB cubes will only scale up to 2 cubes/system. I'd have thought they'd be the perfect choice for an 8GB high-end GPU...
How did the time frame of the HBM1 mass production announcement fit with the appearance of the Fury boards? I'm trying to work out whether we're going to have to wait until 2017 for high-end cards from Nvidia/AMD.
Hynix didn't announce mass production of HBM1 until June last year, and the Fury X launched early July. Samsung have already announced mass production of HBM2 (almost 2 months ago, in fact). Not sure why SK Hynix are behind in the HBM2 stakes (since Samsung are apparently already mass producing 4GB stacks), but unless there are some exclusive supply agreements in place somewhere, I don't think it'll affect the availability of high-end graphics...
The same old Fury X core but with HBM 2.0 and 8GB of memory will be awesome
I guess it can be used if you want to, but they are targeting 2 stacks of 4GB for the 8GB configuration?
Even though 8GB should satisfy every area of the consumer market's needs, surely 1TB/s of bandwidth would be nice on the higher-end cards among these? I can only imagine that, due to willy waving, the high end is coming with 16GB whether we need it or not!
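For reference, the 1TB/s figure falls straight out of the HBM2 spec: each stack has a 1024-bit interface at up to 2Gbps per pin. A quick back-of-the-envelope in Python, using only the spec figures:

```python
# HBM2 bandwidth back-of-the-envelope, using JEDEC HBM2 spec figures.
PINS_PER_STACK = 1024   # each HBM2 stack exposes a 1024-bit data interface
GBPS_PER_PIN = 2        # HBM2 runs at up to 2 Gbps per pin

per_stack = PINS_PER_STACK * GBPS_PER_PIN / 8   # Gb/s -> GB/s
print(f"per stack: {per_stack:.0f} GB/s")       # 256 GB/s

for stacks in (2, 4):
    print(f"{stacks} stacks: {stacks * per_stack:.0f} GB/s")
# 2 stacks (the 8GB config mentioned above) -> 512 GB/s
# 4 stacks -> 1024 GB/s, i.e. the headline ~1TB/s
```

So you'd need four stacks to hit 1TB/s; the two-stack 8GB config tops out around Fury X levels of bandwidth.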
There was this from last year:
http://hexus.net/tech/news/graphics/...xs-hbm2-chips/
ETA: I'd love to see a single 2GB stack turn up in an APU for VRAM, that'd be fun
Question 1: How much does a 2GB HBM 2.0 chip cost?
Question 2: How much electricity does it use compared to DDR4/DDR3?
Question 3: Like the GDDR5 in the PS4, can HBM 2.0 be shared between GPU and CPU?
Question 4: Can you make a removable HBM 2.0 memory module, just like DDR4?
1) They're not out yet so we don't know.
2) Sort of the wrong question to ask - both technologies can use different amounts of power depending on what speed they're being asked to run at. HBM is a competitor for graphics RAM, not system RAM, so the equivalent is GDDR5, and as the article shows, HBM is much more power efficient than GDDR5 (rough numbers in the sketch after this list).
3) Yes, it can, should you design it that way.
4) If you wanted to, yes, though as far as I know no standard for that exists yet - much of the advantage of HBM is that it can be placed on an interposer right next to the GPU. Having to trace between an HBM module and the GPU/CPU would be cost/space/speed prohibitive, I expect.
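As promised, a rough sketch for (2). The GB/s-per-watt figures below are AMD's Fiji-era marketing numbers, not independent measurements, so treat the absolute wattages as ballpark assumptions:

```python
# Memory power at a fixed bandwidth target. The GB/s-per-watt figures
# are AMD's Fiji-era marketing numbers, so treat them as ballpark
# assumptions rather than measurements.
TARGET_BW = 512        # GB/s, Fury X-class bandwidth
GDDR5_EFF = 10.66      # GB/s per watt (AMD slide figure for GDDR5)
HBM_EFF = 35.0         # GB/s per watt (AMD slide figure for HBM1)

print(f"GDDR5: ~{TARGET_BW / GDDR5_EFF:.0f} W")  # ~48 W
print(f"HBM:   ~{TARGET_BW / HBM_EFF:.0f} W")    # ~15 W
```

Roughly a 3x difference in the memory power budget at the same bandwidth, on those figures.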
^ Thanks @ Kalniel.
Disagree with (4)
The whole point of this is to closely couple the memory with the device that needs it. Placing the RAM on a card rather than on the interposer increases trace lengths: off chip, across a connector, and back onto a chip. Then there is that connector. If you have a 1024-bit-wide data bus, you probably need 2048 pins just to route the data, and you'd probably need LGA/PGA RAM to get that many connections at a usable density.
If you want to go off chip and onto a DIMM, then you want DDR4, because that is what it was designed to be good at.
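To put rough numbers on that connector (the one-return-per-signal ratio is an assumed signal-integrity rule of thumb on my part, not a spec figure):

```python
# Connector pin count for a hypothetical removable HBM module vs a DDR4 DIMM.
# The one-return-per-signal ratio is an assumed signal-integrity rule of
# thumb, not a spec figure.
HBM_DATA_BITS = 1024                  # one HBM stack's data bus width
hbm_pins = HBM_DATA_BITS * 2          # data + returns ~ 2048 pins,
                                      # before address/command/power

DDR4_DIMM_PINS = 288                  # total pins on a standard DDR4 DIMM
DDR4_DATA_BITS = 64                   # its data bus width (72 with ECC)

print(f"hypothetical HBM module: ~{hbm_pins}+ pins for {HBM_DATA_BITS} data bits")
print(f"DDR4 DIMM: {DDR4_DIMM_PINS} pins total for a {DDR4_DATA_BITS}-bit bus")
```

That's getting on for an order of magnitude more pins than a whole DDR4 DIMM, just for the data bus of a single stack.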