It is "twice the speed and density of currently available GDDR5," crows Samsung.
Looks like this will win out over HBM2 if the price is right.
Not really; it targets a completely different market, and doesn't address any of the issues that make HBM a better fit in some circumstances. For instance, the high bit rate is going to mean higher energy consumption, the narrow bus means you're still a LONG way behind the per-chip/stack bandwidth of HBM2, and the chip is only 2GB, while HBM2 can already go up to 8GB per stack.
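To put rough numbers on that per-chip/stack gap, here's a back-of-the-envelope sketch in Python. The pin speeds and bus widths (32-bit GDDR6 chips at 16 Gbps per pin, 1024-bit HBM2 stacks at 2 Gbps per pin) are my own assumptions from the published specs, so treat it as illustrative only:

# Rough per-device peak bandwidth comparison.
# Assumed figures: 32-bit GDDR6 chips at 16 Gbps/pin,
# 1024-bit HBM2 stacks at 2 Gbps/pin.

def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    """Peak bandwidth in GB/s = pins * Gbps-per-pin / 8 bits-per-byte."""
    return bus_width_bits * gbps_per_pin / 8

gddr6_chip = bandwidth_gbs(32, 16)    # ~64 GB/s per chip
hbm2_stack = bandwidth_gbs(1024, 2)   # ~256 GB/s per stack

print(f"GDDR6 per chip: {gddr6_chip:.0f} GB/s")
print(f"HBM2 per stack: {hbm2_stack:.0f} GB/s")

So on those assumptions a single HBM2 stack still moves roughly four GDDR6 chips' worth of data.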
It's a direct replacement for GDDR5, not a competitor to HBM2.
I was being very general, Jim. What I mean is that, if the price is right, products will use GDDR6, as HBM2 is too expensive right now. I'm guessing redesigning a product that already uses GDDR5 to run GDDR6 will be easy. Cue the rebrands!
That would miss the point.
Twice the bandwidth per pin and twice the capacity means you can ship with only four memory chips and still hit the magic 8GB to charge top dollar. Fewer chips means fewer PCB layers, which drives costs down. However, I think HBM has shown us that raw bandwidth isn't all that matters. I have to wonder if a 1070 with double-rate memory but a 128-bit bus would suffer from having fewer channels, which I presume can each do their own thing (i.e. I presume they are unganged).
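For what it's worth, here's a quick sketch of that 1070 thought experiment. The 256-bit/8 Gbps GDDR5 figures are the real card's; the 128-bit GDDR6 variant is purely hypothetical, and I'm assuming one channel per 32-bit chip:

# "Same peak bandwidth, fewer channels" trade-off.
# Real config: GTX 1070, 256-bit GDDR5 at 8 Gbps/pin.
# Hypothetical: same card on a 128-bit GDDR6 bus at 16 Gbps/pin.

def card(bus_bits, gbps_per_pin, chip_bits=32):
    channels = bus_bits // chip_bits          # one channel per 32-bit chip (assumed)
    bandwidth = bus_bits * gbps_per_pin / 8   # peak GB/s
    return channels, bandwidth

configs = {
    "1070, 256-bit GDDR5 @ 8 Gbps": card(256, 8),
    "hypothetical, 128-bit GDDR6 @ 16 Gbps": card(128, 16),
}
for name, (channels, bw) in configs.items():
    print(f"{name}: {channels} channels, {bw:.0f} GB/s peak")

Both come out at 256 GB/s peak, but the GDDR6 version only has half the independent channels, which is exactly where unganged access could suffer.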
Both consoles could have waited for this and implemented it in their hardware. I mean, think about it: this would have actually made them more suitable for true 4K.
The fact it runs at a lower voltage will appeal to many, and I'm guessing the yields will be better than HBM2's as well...
There is a claimed 30 per cent manufacturing productivity gain compared to the lower-density 20nm 8Gb GDDR5. Does that mean that the new memory will be 30% cheaper?
Why should we care? This will be overpriced mining stuff.
Could this mean that high resolution gaming becomes cheaper?
Not really; a lot depends on whether current GPUs use their memory as ganged (i.e. one single very wide channel) or unganged (lots of narrow independent channels), and just how much their performance depends on memory access patterns. It's likely that GDDR6 will be more expensive than GDDR5 when it's first introduced, and will be used on top-end cards, only filtering down to the rest of the stack after a couple of generations...
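If it helps, here's a toy sketch of the ganged/unganged distinction. It's my own simplification (made-up channel count and interleave granularity), not any specific GPU's memory controller:

# Toy ganged vs unganged addressing illustration (assumed parameters).
# Ganged: every access spans all chips as one wide channel.
# Unganged: addresses interleave across independent channels, so
# separate requests can be serviced in parallel.

CHANNELS = 4      # assumed number of independent channels
INTERLEAVE = 256  # assumed bytes per channel slice

def unganged_channel(addr):
    """Which independent channel services this address."""
    return (addr // INTERLEAVE) % CHANNELS

requests = [0x0000, 0x0100, 0x0200, 0x0300]
print([unganged_channel(a) for a in requests])  # [0, 1, 2, 3]: all four in parallel
# In a ganged setup, each of these requests would occupy the full-width
# channel, so the four accesses would have to serialize.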