Thread: PCIe bandwidth vs multiple cards

  1. #1
    Senior Member AGTDenton's Avatar
    Join Date
    Jun 2009
    Location
    Bracknell
    Posts
    2,708
    Thanks
    992
    Thanked
    833 times in 546 posts
    • AGTDenton's system
      • Motherboard:
      • MSI MEG X570S ACE MAX
      • CPU:
      • AMD 5950x
      • Memory:
      • 32GB Corsair something or the other
      • Storage:
      • 1x 512GB nvme, 1x 2TB nvme, 2x 8TB HDD
      • Graphics card(s):
      • ASUS 3080 Ti TuF
      • PSU:
      • Corsair RM850x
      • Case:
      • Fractal Design Torrent White
      • Operating System:
      • 11 Pro x64
      • Internet:
      • Fibre

    PCIe bandwidth vs multiple cards

    Just a query I actually can't find a suitable answer to.

    I've been spoilt for some time by my ageing ASUS WS SuperComputer motherboard. Four of its slots are x16 in both size and bandwidth, and if I fill all four they stay at x16 rather than dropping to x8 like on a lot of motherboards; it was one of the very few boards of the PCIe Gen 2 era to do this. The other three x16-sized slots run at only x8 bandwidth.

    On modern motherboards you tend to get at most three x16-sized PCIe slots, and generally only one of them runs at x16 bandwidth; add a second GPU and they all drop to x8.
    What isn't clear is whether that drop only happens when I add a second GPU, or whether Slot 1 would stay at x16 if I put, say, a network or sound card into Slot 2.

    Taking this motherboard as an example : https://download.asrock.com/Manual/X570%20Taichi.pdf (Section 2.4)

    I'm just toying with a workstation/media spec without going to Threadripper or X299, where this limitation isn't as bad or even apparent.
    It confuses me how heavily they advertise the huge number of PCIe lanes available when we don't seem to get much access to them...
    What I have noticed in recent years is that they're trying to restrict what you can do with desktop kit. For example, I tried to use a modern motherboard with Windows Server for some testing, yet the onboard LAN port was blocked, so I had to buy a £10 network card.
    I kind of feel forced towards the more expensive WS components. They've mostly dropped their Pro ranges, and what is available is fairly weak anyway, just a carbon copy of their gaming boards.

    Cheers

  2. #2
    root Member DanceswithUnix's Avatar
    Join Date
    Jan 2006
    Location
    In the middle of a core dump
    Posts
    12,986
    Thanks
    781
    Thanked
    1,588 times in 1,343 posts
    • DanceswithUnix's system
      • Motherboard:
      • Asus X470-PRO
      • CPU:
      • 5900X
      • Memory:
      • 32GB 3200MHz ECC
      • Storage:
      • 2TB Linux, 2TB Games (Win 10)
      • Graphics card(s):
      • Asus Strix RX Vega 56
      • PSU:
      • 650W Corsair TX
      • Case:
      • Antec 300
      • Operating System:
      • Fedora 39 + Win 10 Pro 64 (yuk)
      • Monitor(s):
      • Benq XL2730Z 1440p + Iiyama 27" 1440p
      • Internet:
      • Zen 900Mb/900Mb (CityFibre FttP)

    Re: PCIe bandwidth vs multiple cards

    I think these days if you care that much about PCIe lanes you are expected to go Threadripper.

    It doesn't matter what sort of card you plug into the second slot: if you use that slot, half the lanes from the first slot get diverted to it to make it work.

    PCIe isn't a parallel bus like ISA or PCI; you get a fixed number of lanes which are then allocated between the various slots and devices. Your current board uses Nvidia PCIe bridge chips to connect a pair of x16 slots to a single x16 link out of the CPU, so those slots were physically x16 but shared bandwidth if you accessed them both at the same time, and the bridge chip added some extra latency. Between that and the expense of the extra silicon, bridge chips kind of fell out of fashion.
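    If you want to see what each slot has actually negotiated rather than trusting the manual, on Linux a rough sketch like this reads it straight out of sysfs (assumes the standard /sys/bus/pci layout; cards can also down-train the link when idle, so check under load):

    Code:
    #!/usr/bin/env python3
    # Rough sketch: print the negotiated PCIe link width and speed for every
    # device that exposes one, assuming the usual Linux /sys/bus/pci layout.
    from pathlib import Path

    def read_attr(path):
        try:
            return path.read_text().strip()
        except OSError:
            return None  # attribute missing (e.g. host bridges) or unreadable

    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        width = read_attr(dev / "current_link_width")
        max_width = read_attr(dev / "max_link_width")
        speed = read_attr(dev / "current_link_speed")
        if not width or width == "0":
            continue  # no PCIe link reported for this device
        print(f"{dev.name}: x{width} (max x{max_width}) @ {speed}")

    Run it with a card in the second slot and you should see the GPU drop to x8 if the board splits the lanes.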

    I had a similar problem at work. I ended up with an Asus AMD board that got me a pair of x8 slots and an x4 slot, which worked for my needs.

  3. Received thanks from:

    AGTDenton (04-09-2021)

  4. #3
    Senior Member AGTDenton's Avatar
    Join Date
    Jun 2009
    Location
    Bracknell
    Posts
    2,708
    Thanks
    992
    Thanked
    833 times in 546 posts

    Re: PCIe bandwidth vs multiple cards

    Quote Originally Posted by DanceswithUnix View Post
    I think these days if you care that much about PCIe lanes you are expected to go Threadripper.

    It doesn't matter what sort of card you plug into the second slot: if you use that slot, half the lanes from the first slot get diverted to it to make it work.

    PCIe isn't a parallel bus like ISA or PCI; you get a fixed number of lanes which are then allocated between the various slots and devices. Your current board uses Nvidia PCIe bridge chips to connect a pair of x16 slots to a single x16 link out of the CPU, so those slots were physically x16 but shared bandwidth if you accessed them both at the same time, and the bridge chip added some extra latency. Between that and the expense of the extra silicon, bridge chips kind of fell out of fashion.

    I had a similar problem at work. I ended up with an Asus AMD board that got me a pair of x8 slots and an x4 slot, which worked for my needs.
    Thanks, because of the price difference I'll just have to suck it up for now. I hear Socket 2066 is potentially getting a replacement, but probably not until next year, so I'll wait to see how that stacks up against Threadripper; it might improve prices too.

    I hadn't actually appreciated how few PCIe lanes are available on desktops these days; I thought we would have many more by now, especially with the shift to PCIe M.2 drives.
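    To get it straight in my head, here's a rough tally of where the lanes go on a typical AM4 CPU (the numbers are just the commonly quoted Ryzen 3000/5000 layout, so treat it as a sketch rather than anything from a datasheet):

    Code:
    # Rough tally of the PCIe 4.0 lanes an AM4 Ryzen 3000/5000 CPU exposes.
    # Commonly quoted figures, assumed here for illustration only.
    cpu_lanes = {
        "x16 graphics slot (or x8/x8 when split)": 16,
        "CPU-attached M.2 NVMe": 4,
        "chipset uplink": 4,
    }

    total = sum(cpu_lanes.values())
    for use, lanes in cpu_lanes.items():
        print(f"{lanes:2d} lanes -> {use}")
    print(f"{total} lanes total; everything else on the board shares the x4 chipset uplink")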

    It also explains why SLI and CrossFire have fallen out of fashion, though some boards still claim quad SLI/CrossFire support yet don't provide enough physical slots to actually use it. Thankfully that doesn't matter in my case.

  5. #4
    root Member DanceswithUnix's Avatar
    Join Date
    Jan 2006
    Location
    In the middle of a core dump
    Posts
    12,986
    Thanks
    781
    Thanked
    1,588 times in 1,343 posts

    Re: PCIe bandwidth vs multiple cards

    Quote Originally Posted by AGTDenton View Post
    Thanks, because of the price difference I'll just have to suck it up for now. I hear Socket 2066 is potentially getting a replacement, but probably not until next year, so I'll wait to see how that stacks up against Threadripper; it might improve prices too.

    I hadn't actually appreciated how few PCIe lanes are available on desktops these days; I thought we would have many more by now, especially with the shift to PCIe M.2 drives.

    It also explains why SLI and CrossFire have fallen out of fashion, though some boards still claim quad SLI/CrossFire support yet don't provide enough physical slots to actually use it. Thankfully that doesn't matter in my case.
    More lanes will add a noticeable amount to the motherboard cost, as routing that many lanes across a PCB can be a bit of a git, hence AMD limiting their low-end GPUs to only 8 lanes. To be fair, most people only ever plug a GPU into their PC, so the majority don't need more lanes.

    I think you are right that the pressure on PCIe lanes is going up, even with newer PCIe versions increasing how much you can wring out of each lane. AM5 looks like it is still only 28 PCIe lanes, with 4 of those going to the chipset, so it's no better than AM4, with anything coming off the chipset limited to x4 speed.
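    To put rough numbers on the "wring more out of each lane" bit, here's a quick back-of-envelope calc (theoretical per-lane maxima after encoding overhead, so real-world figures will be a bit lower):

    Code:
    # Back-of-envelope PCIe throughput per lane, per direction, after encoding
    # overhead (8b/10b for Gen2, 128b/130b from Gen3 onwards). Theoretical maxima.
    GB_PER_LANE = {
        "PCIe 2.0": 0.5,    # 5 GT/s * 8/10
        "PCIe 3.0": 0.985,  # 8 GT/s * 128/130
        "PCIe 4.0": 1.969,  # 16 GT/s * 128/130
        "PCIe 5.0": 3.938,  # 32 GT/s * 128/130
    }

    for gen, per_lane in GB_PER_LANE.items():
        widths = ", ".join(f"x{w} ~ {per_lane * w:.1f} GB/s" for w in (4, 8, 16))
        print(f"{gen}: {widths}")

    # A Gen4 x8 slot (~15.8 GB/s) carries about as much as a Gen3 x16, and the
    # x4 Gen4 chipset uplink (~7.9 GB/s) is shared by everything hanging off it.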

    The SLI/Crossfire thing is more just that it no longer really works in modern games.

  6. Received thanks from:

    AGTDenton (05-09-2021)
