And cryptomining blogs suggest that its hash rate makes it a bad GPU choice for miners.
Are there plans for AMD to release a card on a decent memory bus and give us the GPU in its fully capable glory?
The point of the Infinity Cache is that it doesn't need a bigger memory bus to be fully capable. Unless you want to do Ethereum mining, of course.
Sounds like Infinity Cache is good news for gamers based on these comments. More please, AMD. The sooner cryptocurrency dies the better, IMO. (It's dreadful for the planet and is really bad for money laundering, in case anyone wants to know why.)
AMD have invested a lot of expensive silicon in that cache. Assuming they got their modelling and sizing right when designing the thing, the answer should be that there's sod all to gain beyond this, or else they got it wrong.
But perhaps there will be an HBM part for professional use, like the one that ended up in the Radeon VII before.
OTOH, with faster RAM they might be able to push for a slimmer 128-bit interface.
The numbers are good...
I sort of agree, sort of disagree with @DancesWithUnix's analysis. There's a very high chance that removing the Infinity Cache would make performance go backwards. The RX 6000 series clock higher than the competition (they boost up to 2GHz, vs 1.7GHz for a 3080, for example), and the only way I can see a GPU being fed at that frequency is via a cache. Going out to GPU memory would only work if the GPU memory were closer to the boost frequencies, which it isn't.
So best case is probably no difference, worst case is a drop in performance.
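The cache-vs-bus trade-off behind that argument is just weighted bandwidth arithmetic. A back-of-envelope sketch (the cache bandwidth and hit rates here are assumed round numbers for illustration, not AMD's actual figures):

```python
# Effective bandwidth seen by the GPU with an on-die cache in front of GDDR6.
# All figures below are illustrative assumptions, not AMD specifications.

def effective_bandwidth(hit_rate, cache_bw_gbs, mem_bw_gbs):
    """Weighted average of cache and memory bandwidth by cache hit rate."""
    return hit_rate * cache_bw_gbs + (1 - hit_rate) * mem_bw_gbs

mem_bw = 512     # GB/s: a 256-bit bus at 16 Gbps per pin (256/8 * 16)
cache_bw = 2000  # GB/s: assumed on-die cache bandwidth

for hit in (0.0, 0.5, 0.75):
    print(f"hit rate {hit:.0%}: {effective_bandwidth(hit, cache_bw, mem_bw):.0f} GB/s")
```

Even a modest hit rate multiplies the bandwidth available to shaders well beyond what widening the bus alone could deliver, which is presumably the modelling AMD did before committing the die area.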
I think the argument wasn't to just remove the cache (of course that would make performance go backwards), but to do so while also increasing the number of memory controllers for a wider bus (together with faster RAM, perhaps). But as DancesWithUnix pointed out, they wouldn't have gone down the narrow-bus-plus-cache route unless they had already modelled the options and found it was the best solution.
Some of the RT results don't look as bad as I expected!
Makes me wonder if it's certain RT effects, and aspects of denoising, which are affecting AMD GPUs more?
Yeah, I'm not sure GDDR6X is really moving things on much: 19-21Gbps vs 18Gbps per chip, and a whole bunch of supply/power constraints. A 320-bit bus and some top-binned GDDR6 chips would be interesting from the red side, since the rest of the silicon seems to respond well to extra power. But they've clearly crunched the numbers and come up with narrow bus plus cache instead.
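For anyone comparing those options, raw bus bandwidth is just width times per-pin data rate. A quick sketch (the pairings of bus width and speed grade are illustrative, not confirmed card specs):

```python
def bus_bandwidth_gbs(bus_bits, gbps_per_pin):
    """Peak memory bandwidth in GB/s: bus width in bits times per-pin
    data rate in Gbps, divided by 8 bits per byte."""
    return bus_bits * gbps_per_pin / 8

# Illustrative configurations, not exact card specs:
print(bus_bandwidth_gbs(320, 19))  # 320-bit GDDR6X at 19 Gbps
print(bus_bandwidth_gbs(320, 18))  # 320-bit top-binned GDDR6 at 18 Gbps
print(bus_bandwidth_gbs(256, 16))  # 256-bit GDDR6 at 16 Gbps
```

The gap between top-binned GDDR6 and GDDR6X on the same width is only a few percent, which is the "not really moving things on" point above.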
If I'm reading this right, the 6800 has 50% more shaders to feed than the 5700 XT, with just 33% more bus width to feed them? So this one should have plenty, and the higher-end parts are the starved ones.
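Taking those ratios at face value (they are the ratios stated in the post above, not verified card specs), the raw bus width available per shader works out like this:

```python
# Compare bus width per shader, normalised so the 5700 XT = 1.0.
# The 1.50 and 1.33 ratios come from the post above and are not verified specs.

shaders_ratio = 1.50  # 6800 vs 5700 XT shader count (claimed +50%)
bus_ratio = 1.33      # 6800 vs 5700 XT bus width (claimed +33%)

per_shader = bus_ratio / shaders_ratio
print(f"bus width per shader vs 5700 XT: {per_shader:.2f}x")
```

So even under those ratios, each shader gets roughly 11% less raw bus than on the 5700 XT; the cache is what makes up the difference.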
Wide is much better than fast, so the real way to feed a GPU is HBM, but people seem to turn their noses up at HBM these days.