Quote:
And cryptomining blogs suggest that its hash rate makes it a bad GPU choice for miners.
Are there plans for AMD to release a card on a decent memory bus and give us the GPU in its fully capable glory?
The point of the Infinity Cache is that it doesn't need a bigger memory bus to be fully capable. Unless you want to do Ethereum mining, of course.
Quote:
Originally Posted by [GSV
Sounds like Infinity Cache is good news for gamers based on these comments. More please AMD. The sooner cryptocurrency dies the better IMO. (It's dreadful for the planet and is really bad for money laundering, in case anyone wants to know why.)
AMD have invested a lot of expensive silicon in that cache. Assuming they got their modelling and sizing right when designing the thing, the answer should be sod all beyond this or they got it wrong.
But perhaps there will be an HBM part for professional use, like the Vega 20 that ended up in the Radeon VII before.
OTOH, with faster RAM they might be able to push for a slimmer 128-bit interface.
The numbers are good...
I sort of agree, sort of disagree with @DancesWithUnix's analysis. There's a very high chance that removing the Infinity Cache will make performance go backwards. The RX6000 series clock higher than the competition (they boost up to 2GHz, vs 1.7GHz for a 3080, for example), and the only way I can see a GPU being fed at that frequency is via a cache. Going out to GPU memory would only work if the GPU memory was closer to the boost frequencies, which it isn't.
Quote:
Originally Posted by [GSV
So best case is probably no difference, worst case is a drop in performance.
I think the argument wasn't to just remove the cache, of course that will make perf go backwards, but to do so at the same time as increasing the number of memory controllers for a wider bus (together with faster RAM perhaps). But as DancesWithUnix pointed out, they wouldn't have gone down the narrow+cache route unless they had already modelled the options and found it was the best solution.
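To show why the cache changes the sums, here's a toy effective-bandwidth model. All the numbers below are illustrative assumptions for the sake of the sketch, not AMD's published figures:

```python
def effective_bandwidth_gbs(hit_rate: float, cache_bw_gbs: float, dram_bw_gbs: float) -> float:
    """Toy model: requests are served at cache bandwidth on a hit,
    and at DRAM bandwidth on a miss."""
    return hit_rate * cache_bw_gbs + (1 - hit_rate) * dram_bw_gbs

# Illustrative numbers only: a 256-bit GDDR6 bus (~512 GB/s), a much
# faster on-die cache, and a hit rate of around half.
print(effective_bandwidth_gbs(0.0, 1600, 512))  # no cache hits: plain DRAM bandwidth -> 512.0
print(effective_bandwidth_gbs(0.5, 1600, 512))  # half the traffic hits the cache -> 1056.0
```

Even a middling hit rate roughly doubles what the narrow bus can feed the shaders, which is presumably what AMD's modelling showed them.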
Some of the RT results don't look as bad as I expected!
https://hexus.net/media/uploaded/202...9d8b872fe9.jpg
Makes me wonder if it's certain RT effects, and aspects of denoising, which are affecting AMD GPUs more?
Yeah, I'm not sure GDDR6X really moves things on much - 19-21Gbps vs 18 per chip and a whole bunch of supply/power constraints. A 320-bit bus and some top-binned GDDR6 chips would be interesting from the red side, since the rest of the silicon seems to respond well to extra power. But they've clearly crunched it and come up with narrow+cache instead.
If I'm reading this right, the 6800 has 50% more shaders to run than the 5700XT with just 33% more bus width to feed them? So this should have plenty, and the higher end parts are the starved ones.
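For anyone who wants to sanity-check the bus-width talk, peak GDDR bandwidth is just bus width times per-pin data rate. A quick sketch, using configurations of the sort being kicked around in this thread:

```python
def gddr_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: pins (bus width in bits) x Gbps per pin,
    divided by 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

print(gddr_bandwidth_gbs(256, 14))  # 5700 XT-style 256-bit GDDR6 at 14 Gbps -> 448.0
print(gddr_bandwidth_gbs(320, 18))  # a 320-bit bus with top-binned 18 Gbps GDDR6 -> 720.0
print(gddr_bandwidth_gbs(320, 19))  # 3080-style 320-bit GDDR6X at 19 Gbps -> 760.0
```

So the hypothetical 320-bit GDDR6 card lands within ~5% of a 3080's raw bandwidth without touching GDDR6X.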
Wide is much better than fast so the real way to feed a GPU is HBM, but people seem to turn their nose up at HBM these days.
Speaking of HBM, the Zen3 EPYC launch had an interview over at AT and there's this titbit:
https://www.anandtech.com/show/16548...t-norrod-milan
Quote:
We see more and more interest in using high bandwidth memory, for an on-package solution. I think you will see SKU’s in the future from a variety of companies incorporating HBM, especially for AI. That will initially be fairly specialized, to be candid, because HBM is extremely expensive. So for most, standard DDR memory, even DDR5 memory, means that HBM is going to be confined initially to applications that are incredibly memory latency sensitive, and then you know, it’ll be interesting to see how it plays out over time.
Which implies that even for HPC it is too expensive. I guess HPC would need a lot more than 4GB or 8GB.
As for people turning their noses up at HBM, I thought people just weren't impressed with the 4GB of Fury.
And I guess AMD weren't impressed with allegedly losing money on Fury and Vega.
Pity as AMD spent a lot of money developing HBM and all they have to show for it is the Wikipedia entries:
https://en.wikipedia.org/wiki/High_B...ry#Development
In fact, I think Nvidia have done better out of it despite not being involved with its development simply because they sell a lot more high-end compute cards where HBM really helps.