
Thread: AMD publishes GPU chiplet design patent

  1. #1
    HEXUS.admin
    Join Date
    Apr 2005
    Posts
    31,709
    Thanks
    0
    Thanked
    2,073 times in 719 posts

    AMD publishes GPU chiplet design patent

    Theoretical design envisions chiplets connected by a high bandwidth crosslink.
    Read more.

  2. #2
    Senior Member
    Join Date
    Aug 2006
    Posts
    2,207
    Thanks
    15
    Thanked
    114 times in 102 posts

    Re: AMD publishes GPU chiplet design patent

    Makes sense to me. It clearly works on the CPU front, and GPUs are turning into more than just graphics cards these days, so it seems only logical to expand the principle to GPUs... especially at the 'professional' end of the GPU spectrum.

  3. #3
    Senior Member
    Join Date
    Jul 2013
    Posts
    281
    Thanks
    5
    Thanked
    17 times in 13 posts

    Re: AMD publishes GPU chiplet design patent

    Aww man, I got really excited for a second about AMD and GPU news... I’m very (im)patiently awaiting news on the 6700 and 6700 XT lol

  4. #4
    Registered+
    Join Date
    Oct 2020
    Posts
    57
    Thanks
    0
    Thanked
    5 times in 4 posts

    Re: AMD publishes GPU chiplet design patent

    My first thought was that it would be interesting if this brought about a resurgence in Crossfire/SLI style setups. With the primary chiplet managing resources it might be possible to combine CPU, discrete, dedicated RT cards & external GPU hardware in a good way. Either way it seems like AMD is heading towards offering a system that lets you build a single system with x86, x64, ARM, shader, RT & FPGA cores and plenty of bandwidth & fast interconnects between them all.

  5. #5
    Member
    Join Date
    Apr 2019
    Posts
    104
    Thanks
    1
    Thanked
    1 time in 1 post

    Re: AMD publishes GPU chiplet design patent

    Quote Originally Posted by maxopus View Post
    My first thought was that it would be interesting if this brought about a resurgence in Crossfire/SLI style setups. With the primary chiplet managing resources it might be possible to combine CPU, discrete, dedicated RT cards & external GPU hardware in a good way. Either way it seems like AMD is heading towards offering a system that lets you build a single system with x86, x64, ARM, shader, RT & FPGA cores and plenty of bandwidth & fast interconnects between them all.
    Crossfire presented itself as two separate GPUs, and in the end that was too complex to code for (well, basically Mantle and the low-level APIs that followed killed it, as they moved too much of the complexity from the GPU makers' drivers into the application or game, and most game devs were never going to put in that sort of effort to make it work).

    Here the whole point is that it's seen as one GPU. The problem there is that the game dev isn't coding as if they have a bunch of small, linked but independent GPUs; they just expect it to work as one GPU. For that, every core has to work with every other core (sharing memory), which means data stored in one chiplet's cache has to get onto another chiplet. No matter what impressive name they give the crosslink, we end up with a massive bottleneck versus having everything in one chip.

    The best way to fix this is actually to know we have a chiplet setup, which you'd do by having a high-level graphics API and smart drivers that sorted everything out so the chiplets could be used efficiently. Only AMD stuffed all that up when they pushed everyone onto low-level APIs. Hence, while we hear a lot about chiplets, I'm not convinced it'll come to anything for gamers; GPU compute is obviously a different matter.

  6. #6
    Senior Member
    Join Date
    Dec 2013
    Posts
    3,526
    Thanks
    504
    Thanked
    468 times in 326 posts

    Re: AMD publishes GPU chiplet design patent

    Quote Originally Posted by LSG501 View Post
    Makes sense to me. It clearly works on the CPU front, and GPUs are turning into more than just graphics cards these days, so it seems only logical to expand the principle to GPUs... especially at the 'professional' end of the GPU spectrum.
    The problem with doing it on GPUs is the latency. If they can work around that then it's a good path to go down; if, however, it adds too many nanoseconds to the pipeline, it could end up being awful for things like gaming or VR.

    For the 'professional' end of the spectrum I guess it would depend. I wouldn't fancy my self-driving car taking an extra 20-30 ns on every decision it's making; if it were making 1k decisions per second and we added 20-30 ns to each of those decisions, they'd soon add up.
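    A quick back-of-the-envelope check on those figures (a sketch only, using the hypothetical numbers from the post above: 1k decisions per second and a worst case of 30 ns added per decision):

    ```python
    # Back-of-the-envelope check on the figures above (all hypothetical):
    # an extra 30 ns of crosslink latency on each of 1,000 decisions/second.
    extra_ns_per_decision = 30      # worst-case figure from the post
    decisions_per_second = 1_000

    # Total added latency accumulated over one second of decisions.
    added_seconds = extra_ns_per_decision * 1e-9 * decisions_per_second
    print(f"{added_seconds * 1e6:.0f} microseconds of extra latency per second")
    ```

    Whether that accumulated delay matters clearly depends on how hard the real-time deadlines are; the inputs are the post's illustrative numbers, not measurements.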

  7. #7
    Banhammer in peace PeterB kalniel's Avatar
    Join Date
    Aug 2005
    Posts
    31,023
    Thanks
    1,870
    Thanked
    3,381 times in 2,718 posts
    • kalniel's system
      • Motherboard:
      • Gigabyte Z390 Aorus Ultra
      • CPU:
      • Intel i9 9900k
      • Memory:
      • 32GB DDR4 3200 CL16
      • Storage:
      • 1TB Samsung 970Evo+ NVMe
      • Graphics card(s):
      • nVidia GTX 1060 6GB
      • PSU:
      • Seasonic 600W
      • Case:
      • Cooler Master HAF 912
      • Operating System:
      • Win 10 Pro x64
      • Monitor(s):
      • Dell S2721DGF
      • Internet:
      • rubbish

    Re: AMD publishes GPU chiplet design patent

    Quote Originally Posted by Corky34 View Post
    The problem with doing it on GPUs is the latency. If they can work around that then it's a good path to go down; if, however, it adds too many nanoseconds to the pipeline, it could end up being awful for things like gaming or VR.

    For the 'professional' end of the spectrum I guess it would depend. I wouldn't fancy my self-driving car taking an extra 20-30 ns on every decision it's making; if it were making 1k decisions per second and we added 20-30 ns to each of those decisions, they'd soon add up.
    CPUs are more latency-sensitive than GPUs, and if they've got chiplets working for those...

  8. #8
    Senior Member
    Join Date
    Aug 2006
    Posts
    2,207
    Thanks
    15
    Thanked
    114 times in 102 posts

    Re: AMD publishes GPU chiplet design patent

    Quote Originally Posted by Corky34 View Post
    The problem with doing it on GPUs is the latency. If they can work around that then it's a good path to go down; if, however, it adds too many nanoseconds to the pipeline, it could end up being awful for things like gaming or VR.

    For the 'professional' end of the spectrum I guess it would depend. I wouldn't fancy my self-driving car taking an extra 20-30 ns on every decision it's making; if it were making 1k decisions per second and we added 20-30 ns to each of those decisions, they'd soon add up.
    I wouldn't class 'self driving' as the professional market; they're basically going to end up with custom chips, likely ARM or RISC based imo. I'm talking about GPU rendering, encoding, scientific work and the like that you might end up doing on a server farm.

    To be fair, though, this isn't really all that different from SLI, with arguably less latency, and that manages to work quite well in most settings when coded correctly. So assuming it gets the software support it needs, I doubt it would actually be much of an issue for gaming either.

  9. #9
    Senior Member
    Join Date
    Dec 2013
    Posts
    3,526
    Thanks
    504
    Thanked
    468 times in 326 posts

    Re: AMD publishes GPU chiplet design patent

    Quote Originally Posted by kalniel View Post
    CPUs are more latency sensitive than GPUs, and if they've got chiplets working for those...
    Are they? I thought GPUs were, what with frame-time measurements and people saying that if the image in VR isn't updated quickly enough it can cause motion sickness.

  10. #10
    Senior Member
    Join Date
    Jun 2013
    Location
    ATLANTIS
    Posts
    1,207
    Thanks
    1
    Thanked
    28 times in 26 posts

    Re: AMD publishes GPU chiplet design patent

    If the crosslink does 200+ GB/s then latency will not be an issue. The PCIe 4.0 link to the CPU is already fast enough that you don't notice latency when gaming in 8K on a 3090, so what about two chiplets sitting right next to each other sharing resources?
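    For scale, a rough comparison of the two links (assumptions: PCIe 4.0 at roughly 2 GB/s per lane per direction, so ~32 GB/s for an x16 slot; the 200 GB/s crosslink figure is the post's hypothetical number):

    ```python
    # Rough bandwidth comparison of the links mentioned above.
    # PCIe 4.0 x16 offers ~32 GB/s per direction (~2 GB/s per lane x 16 lanes);
    # the 200 GB/s crosslink figure is hypothetical, taken from the post.
    pcie4_lane_gbps = 2              # ~GB/s per PCIe 4.0 lane, per direction
    pcie4_x16_gbps = pcie4_lane_gbps * 16
    crosslink_gbps = 200

    ratio = crosslink_gbps / pcie4_x16_gbps
    print(f"PCIe 4.0 x16: {pcie4_x16_gbps} GB/s; crosslink: {crosslink_gbps} GB/s "
          f"(~{ratio:.1f}x)")
    ```

    An on-package crosslink would also have far lower latency than a trip over the PCIe bus, which is the other half of the post's argument.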

  11. #11
    Banhammer in peace PeterB kalniel's Avatar
    Join Date
    Aug 2005
    Posts
    31,023
    Thanks
    1,870
    Thanked
    3,381 times in 2,718 posts
    • kalniel's system
      • Motherboard:
      • Gigabyte Z390 Aorus Ultra
      • CPU:
      • Intel i9 9900k
      • Memory:
      • 32GB DDR4 3200 CL16
      • Storage:
      • 1TB Samsung 970Evo+ NVMe
      • Graphics card(s):
      • nVidia GTX 1060 6GB
      • PSU:
      • Seasonic 600W
      • Case:
      • Cooler Master HAF 912
      • Operating System:
      • Win 10 Pro x64
      • Monitor(s):
      • Dell S2721DGF
      • Internet:
      • rubbish

    Re: AMD publishes GPU chiplet design patent

    Quote Originally Posted by Corky34 View Post
    Are they? I thought GPUs were, what with frame-time measurements and people saying that if the image in VR isn't updated quickly enough it can cause motion sickness.
    Yeah, way more. In compute you always make sure latency-sensitive work goes on the CPU rather than the GPU, which is more focused on bandwidth than latency. I'm not suggesting latency is never a problem on a GPU, but it's always a problem on a CPU.

  Received thanks from:

    Corky34 (05-01-2021)
