An important upgrade on a passive bridge chiplet design spotted in previous documents.
Any bets on Nvidia laughing at AMD for gluing its GPUs together, before quietly following suit a few years later?
Well, given that they're working on the same idea, and haven't cancelled the research after deciding it won't work and then hidden the few samples they did produce, like Intel did; I doubt it.
Besides, Ryzen was such a step forward in both enhancing performance and mitigating the issues with Moore's law (a big hurdle for gigantic monolithic GPUs) that it'd be silly to ignore chiplet designs.
Personally, all I need to do is remember how good old Xfire and SLI performance was in its short heyday to get excited.
Responding to only the part in bold.
In the handful of titles they actually worked in. And for nothing more than e-peen bragging rights: even though the FPS readout was high, it turned out the frames weren't being rendered evenly spaced. Of course, all of that came after endless faffing trying to get it to work at all.
However, this does look like an exciting angle. Memory in the bridge: a clever use of silicon. More excitingly, should this be released, it will likely become the regular option rather than an Xfire/SLI-style add-on, and so won't suffer from the problems of being a niche technology, such as the endless faffing to get it to work.
"In a perfect world... spammers would get caught, go to jail, and share a cell with many men who have enlarged their penises, taken Viagra and are looking for a new relationship."
Well, that's rather the point, yes? With an all-sorted-by-default solution, the driver issues, game support issues, and developer/engine-specific problems should disappear, and with modern lithography some of the power draw might* go too. Not to mention the extras you mentioned, like intelligent cache and hardware optimised for utilising more chips. What was possible on ye olde multi-GPU was very impressive despite being so patchy and unreliable!
What worries me is that you won't get the control of buying separate cards and setting them up how you see fit. Just like the old single-board dual-GPU cards, these could end up expensive, rare, and hard to get, in a market where even small single-GPU parts have recently been MIA.
*Probably not though lol
I think you can get much better yields with chiplets; that's how I look at it. Instead of wasting so much silicon with current monolithic methods, being able to jigsaw dies together could create an interesting workflow. The way I see it, you could also fabricate precisely what you need for a given application, without wasting the extra silicon, or the power it draws, on things you don't need.
I don't know... I won't shut the idea out; without AMD we wouldn't have many of these technologies we see popping up everywhere today.
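The yield argument above can be put in back-of-the-envelope terms with the simple Poisson yield model Y = exp(-D * A). The defect density and die areas below are made-up illustrative numbers, not real fab data; the point is only that yield falls off exponentially with die area, so several small chiplets waste far less silicon than one huge monolithic die.

```python
import math

def poisson_yield(defects_per_mm2, area_mm2):
    """Fraction of dies with zero defects under a Poisson defect model."""
    return math.exp(-defects_per_mm2 * area_mm2)

D = 0.002  # defects per mm^2 (hypothetical)

big_die = poisson_yield(D, 600.0)  # one 600 mm^2 monolithic GPU
chiplet = poisson_yield(D, 150.0)  # one 150 mm^2 chiplet

print(f"monolithic yield: {big_die:.1%}")  # ~30.1%
print(f"chiplet yield:    {chiplet:.1%}")  # ~74.1%
```

Even with the same defect density, each small chiplet is far more likely to come out defect-free, which is exactly the "jigsaw it together" advantage.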
Crossfire and SLI usually used alternate frame rendering (AFR), which caused stuttering and did absolutely nothing to reduce the rendering time of each individual frame. No idea why that was the standard. Probably because e-peen close to double FPS, yaaay.
I hope they learned from the past and find a better way to distribute workload. Shared memory should make that easy.
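The micro-stutter mechanism described above can be sketched with a toy simulation (all timings hypothetical): two GPUs alternate frames under AFR, each taking 30 ms per frame, but the CPU submits frames almost back-to-back, so finished frames arrive in uneven pairs. The average FPS nearly doubles while the frame-to-frame gaps alternate between very short and very long.

```python
def afr_present_times(num_frames, render_ms=30.0, submit_gap_ms=2.0):
    """Toy AFR model: even frames go to GPU 0, odd frames to GPU 1.

    Each frame starts as soon as both the CPU has submitted it and its
    GPU is idle, then finishes render_ms later. Returns finish times.
    """
    gpu_free = [0.0, 0.0]  # time at which each GPU is next idle
    presents = []
    for i in range(num_frames):
        g = i % 2                       # AFR: alternate between the two GPUs
        submit_at = i * submit_gap_ms   # CPU submits almost back-to-back
        start = max(gpu_free[g], submit_at)
        finish = start + render_ms
        gpu_free[g] = finish
        presents.append(finish)
    return presents

times = afr_present_times(8)
deltas = [b - a for a, b in zip(times, times[1:])]
avg_fps = 1000.0 / (sum(deltas) / len(deltas))

print(deltas)   # [2.0, 28.0, 2.0, 28.0, 2.0, 28.0, 2.0] -- uneven pacing
print(avg_fps)  # ~76 FPS reported, vs ~33 FPS for one GPU alone
```

The FPS counter says nearly double, but the viewer sees frames landing 2 ms apart then 28 ms apart, which is the micro-stutter; shared memory and a hardware scheduler on a chiplet design could pace frame output evenly instead.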