
Thread: Dual Graphics Cards -- Alternating frames

  1. #1
    Member
    Join Date
    Aug 2005
    Posts
    113
    Thanks
    0
    Thanked
    0 times in 0 posts

    Dual Graphics Cards -- Alternating frames

    There's an interesting article on Tom's Hardware that compares the methods of SLI versus CROSSFIRE for dual graphics cards.

    The article mentions the four different "modes" for accomplishing dual graphics cards. Most of the modes slice up the screen in various ways and have the two cards inter-communicating a lot -- for a useful speedup of around twenty percent, which hardly seems worth it. Are people going to buy a special motherboard (SLI- or CROSSFIRE-enabled) PLUS a second graphics card just to get, say, a twenty percent improvement in processing power?

    One of the modes, however, gets a full DOUBLING of the processing power! And that's what I want to discuss here. This mode alternates frames -- one card processing the odd-numbered frames, and the other card processing the even-numbered frames. Each card processes an entire frame image, but each card is allowed twice as long, for twice the processing power per frame. This method is conceptually simple and easy to accomplish, and would seem to be a great solution to doubling the processing power.
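    A toy sketch of the alternation described above (purely illustrative -- no real driver exposes an interface like this):

```python
# Toy sketch of alternate-frame rendering (AFR): frames are dealt to two
# GPUs round-robin, so at a fixed 60 Hz display each GPU effectively gets
# two frame-times (~33 ms) to finish one frame instead of ~16.7 ms.
# All names here are illustrative, not a real driver API.

def assign_frames_afr(frame_numbers):
    """Map each frame number to GPU 0 (even frames) or GPU 1 (odd frames)."""
    return {f: f % 2 for f in frame_numbers}

schedule = assign_frames_afr(range(6))
# Each card renders every other frame, so its per-frame time budget doubles:
per_card_budget_ms = 2 * (1000 / 60)
```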

    One drawback is extremely minor, in my view: this method adds a delay of one frame (nominally 1/60th of a second), which is unnoticeable. The frames are still recomputed and updated 60 times per second for smooth action on your screen.

    The only other drawback is given in the article, which says this mode does not work on games that use the "render to texture" function -- which I don't understand.
    1. What is the "render to texture function"?
    2. What games (or what percentage of good games) use it?
    3. And why isn't it compatible with alternating frames?
    4. Would game developers perhaps discontinue their use of this function, if it meant an easy doubling of processing power for dual graphics cards?

  2. #2
    Senior Member kalniel's Avatar
    Join Date
    Aug 2005
    Posts
    29,024
    Thanks
    1,478
    Thanked
    2,905 times in 2,354 posts
    • kalniel's system
      • Motherboard:
      • Gigabyte X58A UD3R rev 2
      • CPU:
      • Intel Xeon X5680
      • Memory:
      • 12gb DDR3 2000
      • Graphics card(s):
      • nVidia GTX 1060 6GB
      • PSU:
      • Seasonic 600W
      • Case:
      • Cooler Master HAF 912
      • Operating System:
      • Win 10 Pro x64
      • Monitor(s):
      • Dell U2311H
      • Internet:
      • O2 8mbps
    -Render to texture basically creates a 3d scene onto a texture, which you can then put on an object like any other texture.
    -Driving games, flight sims etc. often use this to correctly project scenes onto rear-view mirrors, or for CCTV-style images that show what's happening down a corridor, etc.
    -It probably doesn't work with AFR because the texture probably isn't created afresh every frame; rather, you maintain some information from previous renders, or textures created in a different frame. What AFR doesn't do is allow information to be passed over from frame to frame (because this card might not have rendered that frame or buffer).
    -Nope. It sounds (to me, the amateur) like an incredibly useful function - rendering a scene which can then be wrapped onto something else. There are other ways of doubling power that don't involve AFR for these situations.
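    The cross-frame dependency described above can be sketched like this (illustrative only -- not a real graphics API):

```python
# Sketch of why render-to-texture can break AFR: a texture rendered
# during frame N (say, a rear-view mirror image) is sampled again in
# frame N+1. Under AFR the producing and consuming frames live on
# different cards, so the texture would have to cross between them.

def owner_of_frame(frame):
    # AFR assignment: even frames on card 0, odd frames on card 1.
    return frame % 2

def cross_card_dependency(produced_in, consumed_in):
    """True if a texture made in one frame and reused in another
    must be transferred from one card to the other."""
    return owner_of_frame(produced_in) != owner_of_frame(consumed_in)

# A mirror texture reused on the very next frame always crosses cards:
assert cross_card_dependency(produced_in=4, consumed_in=5)
# ...whereas a texture rebuilt from scratch in the same frame never does.
assert not cross_card_dependency(produced_in=4, consumed_in=4)
```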

  3. #3
    Member
    Join Date
    Jul 2005
    Location
    Cardiff
    Posts
    156
    Thanks
    2
    Thanked
    0 times in 0 posts
    For me personally, SLI was never about getting more frames per second for my money - that was just a nice addition. I wanted to be able to bump up the image quality at 1920x1200 without a dramatic loss of frame rate.
    Last edited by BubbySoup; 01-11-2005 at 02:53 PM.

  4. #4
    Member
    Join Date
    Aug 2005
    Posts
    113
    Thanks
    0
    Thanked
    0 times in 0 posts
    Quote Originally Posted by kalniel
    -Render to texture basically creates a 3d scene onto a texture, which you can then put on an object like any other texture.
    -Driving games, flight sims etc. often use this to correctly project scenes onto rear-view mirrors, or for CCTV-style images that show what's happening down a corridor, etc.
    -It probably doesn't work with AFR because the texture probably isn't created afresh every frame; rather, you maintain some information from previous renders, or textures created in a different frame. What AFR doesn't do is allow information to be passed over from frame to frame (because this card might not have rendered that frame or buffer).
    I see. Thanks! So the render to texture function would be used, say, to pass a scene (or portion of a scene) from one frame into the next frame, for example to display in the rear view mirror of a car. Thus, this information is re-used from the previous frame -- delayed by one frame.

    If so, then why couldn't just that texture be passed from one graphics card to the other (over the PCI-Express bus, or over the special interconnect between the two cards) only when it is needed, rather than passing all the other image data all the time? Wouldn't that be a more efficient implementation than using the other (non-AFR) modes?

    There are other ways of doubling power that don't involve AFR for these situations
    I'm new to this. But it currently seems that without alternating frames (i.e., by using the non-AFR modes) there is a great deal of inter-communication and inter-coordination required between the graphics cards, and it ends up producing only a marginal improvement in processing power (on the order of 20% improvement, rather than a full doubling of processing power). I don't yet see the non-AFR modes generally doubling the processing power. Can you explain?

  5. #5
    Member
    Join Date
    Aug 2005
    Posts
    113
    Thanks
    0
    Thanked
    0 times in 0 posts
    Quote Originally Posted by BubbySoup
    For me personally, SLI was never about getting more frames per second for my money - that was just a nice addition. I wanted to be able to bump up the image quality at 1920x1200 without a dramatic loss of frame rate.
    I'm coming to a similar conclusion: SLI is about getting better image quality, with no gains, or only marginal gains, in frame rate. This also seems to be the case with the new dual-16-lane PCI-Express SLI motherboards. The improvement shows up not as better frame rates, but rather as improved image quality (in AA, etc.), because of the vast amount of data slammed back and forth over the 16 lanes of PCI-Express bus. Imagine that -- they more than filled up the dual-8-lane PCI-Express capacity, and the improvement was gained just in image quality! That shows how much data gets moved back and forth. As I was saying earlier, the usual (non-AFR) modes expend a lot of resources shuttling data between cards and coordinating the two cards. I'm not yet convinced it's worth it.

  6. #6
    Senior Member kalniel's Avatar
    Join Date
    Aug 2005
    Posts
    29,024
    Thanks
    1,478
    Thanked
    2,905 times in 2,354 posts
    You're quite right - a lot of stuff does get passed between the cards, but the main chunk of the cards don't actually process it, instead there's just a compositing chip that sticks the pieces back together. For example if one card renders the top and one the bottom then all the chip has to do is display the top half pixels from one card and the bottom half from the other - there's no 3d processing going on, it's just taking the screen outputs and joining them.
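    The compositing step described above can be sketched in a few lines (illustrative only -- a real compositing chip works on video signals, not Python lists):

```python
# Minimal sketch of split-frame compositing: each "card" produces half of
# the final frame, and a trivial compositing step stitches the scanline
# ranges back together. No 3D processing happens in the composite step.

def render_half(scene_rows, top):
    """Stand-in for one card rendering either the top or bottom half."""
    half = len(scene_rows) // 2
    return scene_rows[:half] if top else scene_rows[half:]

def composite(top_half, bottom_half):
    # Just join the two row ranges -- the "chip" does no shading work.
    return top_half + bottom_half

scene = [f"row{i}" for i in range(8)]
frame = composite(render_half(scene, True), render_half(scene, False))
assert frame == scene  # the stitched output matches a full single-card render
```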

    Passing a texture between the two would require that texture to be fed into the processing of the second card, where it would be embedded in the scene and subject to all the post processing, lighting passes etc. Which strikes me as being a bit complex!

    Split-screen rendering produces a performance increase not dissimilar to AFR, AFAIK. It is just not as simple to implement on any game.

    Most multi-card setups do not rely on the PCI bus exclusively - both NVidia and ATI have their own connectors between the cards in addition to the PCI bus.

  7. #7
    Member
    Join Date
    Aug 2005
    Posts
    113
    Thanks
    0
    Thanked
    0 times in 0 posts
    Quote Originally Posted by kalniel
    You're quite right - a lot of stuff does get passed between the cards, but the main chunk of the cards don't actually process it, instead there's just a compositing chip that sticks the pieces back together. For example if one card renders the top and one the bottom then all the chip has to do is display the top half pixels from one card and the bottom half from the other - there's no 3d processing going on, it's just taking the screen outputs and joining them.
    I believe it's more complicated than merely passing the upper and lower halves of the screen and joining them together. For example, many objects and textures visually cross the mid-screen boundary, and thus so would their shading, AA, and AF processing -- which would require extra inter-communication and coordination between the cards.

    Also, in many scenes the required processing power is not the same in the two halves of the screen, so one card would sit idle while the other finishes up. There are methods for dynamic "load balancing" which attempt to predict the load and equalize the amount of work done by the two cards, but this again requires extra inter-communication and coordination between the cards.
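    One hypothetical form such load balancing could take -- nudging the split line toward whichever card finished early, so both do roughly equal work next frame. The damping factor here is an assumption for illustration, not any vendor's actual algorithm:

```python
# Hypothetical dynamic load-balancing step for split-frame rendering.
# `split` is the fraction of the screen given to the top card (0..1).

def adjust_split(split, time_top_ms, time_bottom_ms, gain=0.1):
    """Move the split toward the slower card's half, clamped to [0.1, 0.9]."""
    total = time_top_ms + time_bottom_ms
    if total == 0:
        return split
    imbalance = (time_bottom_ms - time_top_ms) / total
    # Give the top card more rows when the bottom card was the bottleneck.
    return min(0.9, max(0.1, split + gain * imbalance))

# The bottom half took longer, so the top card gets more of the screen:
new_split = adjust_split(0.5, time_top_ms=8.0, time_bottom_ms=12.0)
assert new_split > 0.5
```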

    Alternating frames (the AFR mode) doesn't have any of those drawbacks.

    Passing a texture between the two would require that texture to be fed into the processing of the second card, where it would be embedded in the scene and subject to all the post processing, lighting passes etc. Which strikes me as being a bit complex!
    I don't see that as being any more complex. That is, in AFR mode, processing the texture (resulting from the "render to texture" function) would remain the same whichever card does it. The main difficulty is merely moving the texture from one card to the other, which is not so difficult. (Again, this would not have to be done all the time, but only when the game uses a "render to texture" function.)

    Whereas in non-AFR (split-screen) mode, the two half-screens must be communicated and combined, then the render-to-texture function is applied, then the resulting texture must be communicated to both cards for use in the next frame. I don't see that as being any faster or simpler than AFR mode.

    Most multi-card setups do not rely on the PCI bus exclusively - both NVidia and ATI have their own connectors between the cards in addition to the PCI bus.
    That's correct. There is a lot of data being passed between the two cards -- so much so that the dual-8-lane PCI-Express bus is too bandwidth-limited to handle the higher image qualities (high AA at high resolution), which is where the new dual-16-lane motherboards show an improvement. That indicates an enormous amount of data moving between the cards. But it isn't needed in AFR mode.

    SLI/Crossfire is still a mystery to me, and I'm struggling to see the real advantages of the non-AFR modes. They seem more complicated, and end up showing a modest improvement (~20% improvement), rather than doubling the real processing power. I appreciate any help you can give in understanding this.

  8. #8
    Member
    Join Date
    Aug 2005
    Posts
    113
    Thanks
    0
    Thanked
    0 times in 0 posts
    Another interesting thought. The mode of using Alternating Frames (the AFR mode) requires far less intercoordination between the two graphics cards -- and that raises the following (perhaps naive) possibility. The AFR mode makes it easier to use two graphics cards that are not identical, perhaps not even the same model.

    The most difficult part of it (conceptually) is to have some standard method for transferring the textures (resulting from the "render to texture" function) from one card to the other.

    The alternation of frames itself is (conceptually) separate from the inner workings of the graphics card. (Basically, the hardware is just grabbing a frame from one card, and then the other, back and forth.)

    Imagine (for the sake of discussion) that one of the graphics cards has AA or AF, and the other doesn't. In AFR mode, everything would still 'work', and you'd get a respectable image: the human eye would scarcely see the flicker of alternating images 60 times per second; rather, it would average the two together.

    I pose these questions in my quest to further understand the advantages/disadvantages of the dual graphics modes.

  9. #9
    Senior Member
    Join Date
    Sep 2005
    Posts
    390
    Thanks
    3
    Thanked
    2 times in 2 posts
    Quote Originally Posted by Artic_Kid

    I don't see that as being any more complex. That is, in AFR mode, processing the texture (resulting from the "render to texture" function) would remain the same whichever card does it. The main difficulty is merely moving the texture from one card to the other, which is not so difficult. (Again, this would not have to be done all the time, but only when the game uses a "render to texture" function.)
    I think the problem with this is that the communication is normally just to stick together parts of a frame that have all been processed already. Your suggestion would mean a texture being transferred not just to the compositing chip but into a frame being processed by the GPU. This would get a lot more complex, and could be a problem, as current GPUs are not designed to have textures sent to them for integration into an existing frame. That's what I reckon it could be, but I am sure there is a good reason that the manufacturers have not implemented it.

  10. #10
    Member
    Join Date
    Aug 2005
    Posts
    113
    Thanks
    0
    Thanked
    0 times in 0 posts
    Quote Originally Posted by fredered
    .... the current GPUs are not designed to have textures sent to them for integration into an existing frame.
    The "render to texture" function (as it has been illuminated so far in this thread) flattens a complex scene into a single texture for transmission into the next frame -- say, for display in the rear-view mirror of a car, or on a security guard's TV monitor within the game. Current GPUs are already designed to do that.

    This communication of a texture from one frame to the next is said (?) to be the reason why the "render to texture" function does not work in alternating-frames mode (AFR mode). I am questioning that reasoning, because I see little difference between passing a texture from one frame to the next frame (on one graphics card), versus passing the same texture from one graphics card to another (for use in the next frame). The only difference, conceptually, is the passing of the texture from one card to another, which is not difficult.

    .... but I am sure there is a good reason that the manufacturers have not implemented it.
    I'm not so sure. Given all the hype from the graphics manufacturers, I'm not exactly trusting them on this matter. I'd like to understand the Dual graphics card (SLI/CROSSFIRE) issue better, and I hope someone here can illuminate it. I'm here contrasting the full-doubling of processing power achieved in AFR mode, compared to the ~20% improvement seen in non-AFR mode.

    Can anyone here explain this?

  11. #11
    Senior Member kalniel's Avatar
    Join Date
    Aug 2005
    Posts
    29,024
    Thanks
    1,478
    Thanked
    2,905 times in 2,354 posts
    I don't know where you are getting this 20% figure. Most games I've seen that are limited by gfx power are getting nearly 100% improvement with SLI - see http://www.firingsquad.com/hardware/...nce/page10.asp

    for an example (hint - scroll down to the higher resolutions where gfx power becomes limiting).

  12. #12
    Member
    Join Date
    Aug 2005
    Posts
    113
    Thanks
    0
    Thanked
    0 times in 0 posts
    Quote Originally Posted by kalniel
    I don't know where you are getting this 20% figure. Most games I've seen that are limited by gfx power are getting nearly 100% improvement with SLI - see http://www.firingsquad.com/hardware/...nce/page10.asp

    for an example (hint - scroll down to the higher resolutions where gfx power becomes limiting).
    You make a good point. But there's more to the story. For example, PCSTATS did benchmark comparisons of the 7800GT in single versus SLI configuration for the game FarCry at 1600x1200 resolution, with various image-quality settings (AA, AF, etc.) (This high resolution with added AA and AF is surely not CPU limited, and therefore we should see a full measure of the improvement due to SLI.) But in this case the frame rate improvement due to SLI ranged between 11 percent and 54 percent, with an average of only 29 percent improvement. That led the reviewers to comment, "It's funny to see how little SLI helps out when AA+AF is enabled at its max. The power of marketing is strong in this one..."

    In their benchmark data, the SLI configuration had the most problems with the highest levels of filtering (16xAF, and especially 8xAA) -- which is precisely the situation where the two graphics cards must share (and swap) the most data. I suspect this increased swapping of data is slowing down the SLI configuration and reducing its efficiency.

  13. #13
    Senior Member
    Join Date
    Jan 2005
    Location
    Manchester
    Posts
    2,881
    Thanks
    67
    Thanked
    174 times in 130 posts
    • Butcher's system
      • Motherboard:
      • MSI Z97 Gaming 3
      • CPU:
      • i7-4790K
      • Memory:
      • 8 GB Corsair 1866 MHz
      • Storage:
      • 120GB SSD, 240GB SSD, 2TB HDD
      • Graphics card(s):
      • MSI GTX 970
      • PSU:
      • Antec 650W
      • Case:
      • Big Black Cube!
      • Operating System:
      • Windows 7
    Hmm, a few things to comment on (I'm a game dev BTW ):

    Why not just send the RT (render target) texture from one frame to the other in AFR?

    The main problem is that by the time card A generates said texture, card B is already halfway through its frame. If you're doing AFR the rendering is overlapped - card B starts the next frame before card A finishes the current frame, thus making passing results from one frame to the next very difficult if not impossible.
    The other problem is they're often huge and you suck up a lot of bandwidth sending textures about.

    Why is this not a problem in SFR modes?

    Because both cards render the texture independently. The reason you see comparatively modest gains with SFR is that both cards take all the geometry and transform it all. They also render all the RT textures they need to render their part of the scene. There's a fair amount of duplication of work here which reduces the performance gain.
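    A back-of-the-envelope model of that duplication effect (the fractions are illustrative, not measured): if a fraction d of the per-frame work is done on both cards, the ideal split-frame speedup drops from 2x to 2/(1+d).

```python
# Simple duplicated-work model for SFR scaling: with no duplication the
# two cards double throughput; as the duplicated fraction d grows, the
# theoretical speedup 2/(1+d) shrinks toward 1x.

def sfr_speedup(duplicated_fraction):
    """Ideal two-card speedup when a fraction of the work is duplicated."""
    return 2.0 / (1.0 + duplicated_fraction)

assert sfr_speedup(0.0) == 2.0               # no duplication: full doubling
# Heavy duplication (e.g. d = 0.65) leaves only a ~20% gain:
assert abs(sfr_speedup(0.65) - 1.21) < 0.01
```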

    How do you handle an object crossing the middle of the screen (or wherever the split is)?

    Since both cards transform all the geometry and have all the textures and shaders on them, they can render half an object as easily as the whole thing. So each card renders half of the object. It's not quite twice as fast to render half, but it's not far off, as a lot of the time is taken in the pixel-processing side of things, which is only done for pixels actually drawn.


    Did I miss anything obvious there? Or does that cover your questions?

  14. #14
    Member
    Join Date
    Aug 2005
    Posts
    113
    Thanks
    0
    Thanked
    0 times in 0 posts
    Quote Originally Posted by Butcher
    I'm a game dev BTW
    Thanks Butcher, I need the insight of a game developer like yourself.

    Why not just send the RT (render target) texture from one frame to the other in AFR?

    The main problem is that by the time card A generates said texture, card B is already halfway through its frame. If you're doing AFR the rendering is overlapped - card B starts the next frame before card A finishes the current frame, thus making passing results from one frame to the next very difficult if not impossible.
    Good insight. Thanks. Though I can see many cases where passing textures in Alternating Frames (AFR mode) would still work.

    For example, if card A generates a given texture at, say, 90% of the way through its frame, and card B uses that texture sometime after 40% of the way through its frame, then passing the texture between the two cards would work. (Likewise the figures 30% and 80%, respectively, would also work, as would 20% and 70%, etcetera. So long as the two figures differ by at least 50% in the proper direction, it would work.)

    I suspect an easy way to implement this would be as follows. Simply have the GPUs use the passed texture last (or as late as possible) in the process. So long as it is used after the 50% point in the frame, this method will always work. (And that's for the worst case, where the texture is passed at the very last moment of the frame. If the texture is passed earlier in the frame, then it can be used earlier in the next frame by the other card.) I suspect this protocol is not so difficult to do.
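    The timing argument above can be sketched as a feasibility check, assuming the two cards' frames overlap by exactly half a frame-time (card B starts frame N+1 when card A is 50% through frame N -- that overlap figure is an assumption for illustration):

```python
# Feasibility check for handing a texture from card A (frame N) to
# card B (frame N+1) under overlapped AFR. Times are fractions of a
# frame; card B's frame starts (1 - overlap) frame-times after card A's.

def handoff_feasible(produced_at, consumed_at, overlap=0.5):
    """produced_at: fraction of frame N when card A finishes the texture.
    consumed_at: fraction of frame N+1 when card B first needs it."""
    # Card B reaches `consumed_at` at wall time (1 - overlap) + consumed_at,
    # measured in frame-times from card A's frame start.
    return (1 - overlap) + consumed_at >= produced_at

assert handoff_feasible(produced_at=0.9, consumed_at=0.4)  # the 90%/40% case
assert handoff_feasible(produced_at=0.3, consumed_at=0.8)  # the 30%/80% case
assert not handoff_feasible(produced_at=0.9, consumed_at=0.2)  # too early
```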

    The other problem is they're often huge and you suck up a lot of bandwidth sending textures about.
    The way it's been described to me (earlier in this thread), the texture being passed is usually not large -- say, an image to appear in the rear-view mirror of a car, or on a surveillance television in a scene. My impression is that such textures would be the only thing passed between the cards for processing purposes, and so would require less bandwidth than the split-screen method (SFR).

    Why is this not a problem in SFR modes?

    Because both cards render the texture independently. The reason you see comparatively modest gains with SFR is that both cards take all the geometry and transform it all. They also render all the RT textures they need to render their part of the scene. There's a fair amount of duplication of work here which reduces the performance gain. (emphasis added)
    That hits the nail on the head. It gets to the heart of my concerns here. SLI in SFR mode seems to often give a minimal performance gain -- so much so that I'm wondering whether it's worth it. For example (as referenced in my previous post), on a 7800GT, the game Far Cry at 1600x1200 resolution and 8xAA (with or without AF!) gets only an eleven percent improvement in frame rate from SLI! Is that worth the extra cost of an SLI mobo plus a second graphics card (plus the extra power, heat, fans, noise, and hassling with special drivers)? In addition, SLI does nothing for games that are CPU-limited. So there's only a narrow range of circumstances where SLI in SFR mode seems worth it.

    I am thinking that AFR mode (and games that can use it) is where you get the real advantage of SLI -- a true doubling of processing power.

    I am starting to wonder if (without Alternating Frames, AFR mode, and the games that can use it) it's better just to put your money into a single, fast, GPU board, and forget about SLI.

    Any thoughts on that?
    Last edited by Artic_Kid; 30-11-2005 at 05:19 AM.

  15. #15
    Senior Member kalniel's Avatar
    Join Date
    Aug 2005
    Posts
    29,024
    Thanks
    1,478
    Thanked
    2,905 times in 2,354 posts
    Quote Originally Posted by Artic_Kid
    The way it's been described to me (in the thread above previously), the texture being passed is usually not large, say, an image to appear in the rear view mirror of a car, or on a surveillance television in a scene. My impression is that such textures would be the only thing passed between the cards for processing purposes, and so would require less bandwidth than the split screen method (SFR).
    I omitted a rather more modern use of render to texture - rendering a cube reflection map.

    You create a cube with 6 sides made up of textures from a scene rendered in 6 orientations. This is 'placed' around an object and light reflected from it, to give you the impression the object is reflecting the area it's in. The resolution of the texture obviously depends on the size of the object, but you also need 6 of them.
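    A rough size estimate for such a cube map (the per-face resolution and 4 bytes/pixel for RGBA8 are assumed figures for illustration):

```python
# Rough memory cost of a render-to-texture cube reflection map:
# six square faces, each face_resolution x face_resolution pixels.

def cube_map_bytes(face_resolution, bytes_per_pixel=4):
    """Total bytes for a 6-face cube map at the given face resolution."""
    return 6 * face_resolution * face_resolution * bytes_per_pixel

# Even a modest 256x256-per-face reflection map is ~1.5 MB of data that
# would have to cross between the cards every frame under AFR:
size_mb = cube_map_bytes(256) / (1024 * 1024)
assert abs(size_mb - 1.5) < 0.01
```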

  16. #16
    Senior Member
    Join Date
    Jan 2005
    Location
    Manchester
    Posts
    2,881
    Thanks
    67
    Thanked
    174 times in 130 posts
    Cube maps are a large part of render-to-texture operations, and they're very big and expensive. Another thing you might end up doing is something like a cloth simulation using the GPU, which can output a relatively large amount of data that is needed on the next frame.

    Quote Originally Posted by Artic_Kid
    For example, if card A generates a given texture at, say, 90% of the way through its frame, and card B uses that texture sometime after 40% of the way through its frame, then passing the texture between the two cards would work. (Likewise the figures 30% and 80%, respectively, would also work, as would 20% and 70%, etcetera. So long as the two figures differ by at least 50% in the proper direction, it would work.)

    I suspect an easy way to implement this would be as follows. Simply have the GPUs use the passed texture last (or as late as possible) in the process. So long as it is used after the 50% point in the frame, this method will always work. (And that's for the worst case, where the texture is passed at the very last moment of the frame. If the texture is passed earlier in the frame, then it can be used earlier in the next frame by the other card.) I suspect this protocol is not so difficult to do.
    The problem is you'd have to code the game with SLI in mind to do that. Most (currently) are not.
    Also, it's not necessarily feasible to do this amount of rearrangement. There are certain things you have to do in certain orders for them to work correctly; rendering last isn't usually an option. E.g. if the game is using HDR then it will have to perform tone mapping and such at the end of the frame; also, alpha-blended objects are generally drawn after opaque ones, which adds more dependencies. And of course any FSAA is always done last. These sorts of dependencies will often push the use of the rendered texture forwards in the frame and make overlap impossible.

