Theoretical design envisions chiplets connected by a high-bandwidth crosslink.
Makes sense to me. It clearly works on the CPU front, and GPUs are turning into more than just graphics cards these days, so it only seems logical to extend the principle to the GPU side... especially at the 'professional' end of the GPU spectrum.
Aww man, I got really excited for a second about AMD and GPU news... I’m very (im)patiently awaiting news on the 6700 and 6700 XT lol
My first thought was that it would be interesting if this brought about a resurgence in CrossFire/SLI-style setups. With the primary chiplet managing resources, it might be possible to combine CPU, discrete GPU, dedicated RT cards & external GPU hardware in a sensible way. Either way, it seems like AMD is heading towards a system that lets you build a single box with x86, x64, ARM, shader, RT & FPGA cores, with plenty of bandwidth & fast interconnects between them all.
CrossFire presented itself as two separate GPUs, and in the end that proved too complex to code for. Well, basically Mantle and the low-level APIs that followed killed it, as they moved too much of the complexity out of the GPU makers' drivers and into the application or game, and most game devs were never going to put in that sort of effort to make it work.
Here the whole point is that it's seen as one GPU. The problem is that game devs aren't coding it as though they have a bunch of small, linked-but-independent GPUs; they just expect it to work as a single GPU. For that to hold, every core has to work with every other core (sharing memory), which means data stored in one chiplet's cache has to be moved into another's. Whatever impressive name they give the crosslink, that's a massive bottleneck compared with having everything on one chip, as the rough numbers below show.
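To put rough figures on that (every number here is my own illustrative assumption, not anything from AMD's patent), a quick Python back-of-envelope:

# All numbers below are assumed for illustration -- not AMD's specs.
on_die_cache_gbs = 2000   # assumed on-chip cache bandwidth, GB/s
crosslink_gbs    = 200    # assumed chiplet-to-chiplet link, GB/s
working_set_gb   = 0.5    # assumed data a frame needs from the other chiplet

print(f"on-die:    {working_set_gb / on_die_cache_gbs * 1e3:.2f} ms")
print(f"crosslink: {working_set_gb / crosslink_gbs * 1e3:.2f} ms")
# ~0.25 ms vs ~2.5 ms -- roughly a 10x penalty whenever the data lives on the wrong chiplet

Even a very fast crosslink is an order of magnitude behind staying on-die, and that gap is the bottleneck I mean.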
The best way to fix this is to actually acknowledge that it's a chiplet setup, which you'd do with a high-level graphics API and smart drivers that sorted everything out so the chiplets could be used efficiently. Only AMD scuppered all that when they pushed everyone onto low-level APIs. So while we hear a lot about chiplets, I'm not convinced it'll come to anything for gamers - GPU compute is obviously a different story.
The problem with doing it on GPUs is the latency. If they can work around that then it's a good path to go down; if, however, it adds too many nanoseconds to the pipeline, it could end up being awful for things like gaming or VR.
For the 'professional' end of the spectrum I guess it would depend. I wouldn't fancy my self-driving car taking an extra 20-30 ns on every decision it makes; if it was making 1k decisions per second and we added 20-30 ns to each of those decisions, they'd soon add up.
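For what it's worth, here's the sum, taking my own 1k-decisions-per-second and 30 ns figures as the assumptions:

# Quick check of my assumed figures above -- not measured from any real system.
decisions_per_second = 1_000
added_latency_ns     = 30   # top of the 20-30 ns range
total_extra_ns = decisions_per_second * added_latency_ns
print(total_extra_ns / 1e3, "microseconds of extra latency accumulated per second")
# 30.0 -- though each individual decision is still only ~30 ns later

Whether that matters comes down to how hard the real-time deadline on each decision is.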
I wouldn't class 'self-driving' as the professional market; they're basically going to end up with custom chips, likely ARM- or RISC-based IMO. I'm talking about GPU rendering, encoding, scientific work and the like that you might end up doing on a server farm.
To be fair, though, this isn't really all that different from SLI, with arguably less latency, and that manages to work quite well in most settings when coded correctly. So assuming it gets the software support it needs, I doubt it would actually be much of an issue for gaming either.
If the crosslink does 200GB/s or more then latency will not be an issue. The PCIe 4.0 link to the CPU is already fast enough that you don't notice latency when gaming in 8K on a 3090, so what about two chiplets sitting right next to each other sharing resources?
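For scale (the PCIe 4.0 x16 figure is the published ~31.5GB/s per direction; the 200GB/s crosslink is just this thread's assumption):

# PCIe 4.0 x16 peak is ~31.5 GB/s per direction (spec); 200 GB/s is assumed here.
pcie4_x16_gbs = 31.5
crosslink_gbs = 200.0
frame_8k_gb   = 7680 * 4320 * 4 / 1e9   # one 32-bit 8K framebuffer, ~0.13 GB

print(f"over PCIe 4.0:  {frame_8k_gb / pcie4_x16_gbs * 1e3:.1f} ms per frame copy")
print(f"over crosslink: {frame_8k_gb / crosslink_gbs * 1e3:.1f} ms per frame copy")
# ~4.2 ms vs ~0.7 ms -- two adjacent chiplets should beat the CPU link easily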