
Thread: News - CUDA 5 - Kepler at its best

  1. #1
    HEXUS.admin
    Join Date
    Apr 2005
    Posts
    31,709
    Thanks
    0
    Thanked
    2,073 times in 719 posts

    News - CUDA 5 - Kepler at its best

Software catches up with the hardware.
    Read more.

  2. #2
    Banhammer in peace PeterB kalniel's Avatar
    Join Date
    Aug 2005
    Posts
    31,024
    Thanks
    1,871
    Thanked
    3,382 times in 2,719 posts
    • kalniel's system
      • Motherboard:
      • Gigabyte Z390 Aorus Ultra
      • CPU:
      • Intel i9 9900k
      • Memory:
      • 32GB DDR4 3200 CL16
      • Storage:
      • 1TB Samsung 970Evo+ NVMe
      • Graphics card(s):
      • nVidia GTX 1060 6GB
      • PSU:
      • Seasonic 600W
      • Case:
      • Cooler Master HAF 912
      • Operating System:
      • Win 10 Pro x64
      • Monitor(s):
      • Dell S2721DGF
      • Internet:
      • rubbish

    Re: News - CUDA 5 - Kepler at its best

Poor Hexus. We already knew all this back when Kepler was released. For example, this article back in May:

    http://www.theregister.co.uk/2012/05...led/page2.html

    Quote Originally Posted by hexus
These new features excite us and we expect to see some serious practical usage in gaming next year with the release of new high-end consoles.
I didn't think the next set of consoles had nVidia chips in them. Does Hexus know something we don't?

If you're serious about PC gaming then these features shouldn't excite you; they should only give cause to worry that the already small market is going to be fragmented even further. We need work on cross-vendor features like OpenCL instead.

    And highlighting old features just after an AMD driver release (I don't recall Hexus doing the reverse after nVidia driver release news) smacks of influence in Hexus' decision making. Come on, you're better than that.
    Last edited by kalniel; 24-10-2012 at 11:40 AM.

  3. #3
    Team HEXUS.net
    Join Date
    Jul 2003
    Posts
    1,396
    Thanks
    75
    Thanked
    411 times in 217 posts

    Re: News - CUDA 5 - Kepler at its best

    I don't agree with your thinking on this one, Kalniel.

CUDA 5 was officially launched last week, so it makes sense to cover it in some way.

    Also, NVIDIA rolled out new beta GeForce drivers yesterday that offer up to 15 per cent extra performance, though the majority of gains are sub-five per cent.

    http://www.geforce.com/whats-new/articles/nvidia-geforce-310-33-beta-drivers-released/

We took a good look at the improvements and decided that, on balance, the gains weren't significant enough for a full-on analysis of the kind we gave Catalyst 12.11.

    Take a look at the amount of AMD vs. NVIDIA coverage in the last few weeks, too.

  4. #4
    Banhammer in peace PeterB kalniel's Avatar
    Join Date
    Aug 2005
    Posts
    31,024
    Thanks
    1,871
    Thanked
    3,382 times in 2,719 posts
    • kalniel's system
      • Motherboard:
      • Gigabyte Z390 Aorus Ultra
      • CPU:
      • Intel i9 9900k
      • Memory:
      • 32GB DDR4 3200 CL16
      • Storage:
      • 1TB Samsung 970Evo+ NVMe
      • Graphics card(s):
      • nVidia GTX 1060 6GB
      • PSU:
      • Seasonic 600W
      • Case:
      • Cooler Master HAF 912
      • Operating System:
      • Win 10 Pro x64
      • Monitor(s):
      • Dell S2721DGF
      • Internet:
      • rubbish

    Re: News - CUDA 5 - Kepler at its best

Thanks for the reply, Tarinder. I was surprised there was no coverage of the nVidia drivers - that would have been more consistent given past coverage, and appropriate IMHO.

On the other hand, you make the point in the article that it's only fair to cover CUDA 5 given the AMD driver news. It doesn't make sense to me to cover it only now, and in some way as a response to the driver news.

Nor does it make sense to me to claim that you'll see this in new gaming consoles, or to imply that "It's highly expected that all next-gen AAA game engines will utilise this form of acceleration in one way or another." It's only if you look carefully at the exact wording that you see you might be talking about general GPU compute rather than CUDA, when the whole tone of the article is closely tied to nVidia's Kepler.

  5. #5
    Senior Member
    Join Date
    Jun 2004
    Location
    Kingdom of Fife (Scotland)
    Posts
    4,991
    Thanks
    393
    Thanked
    220 times in 190 posts
    • crossy's system
      • Motherboard:
      • ASUS Sabertooth X99
      • CPU:
      • Intel 5830k / Noctua NH-D15
      • Memory:
      • 32GB Crucial Ballistix DDR4
      • Storage:
      • 500GB Samsung 850Pro NVMe, 1TB Samsung 850EVO SSD, 1TB Seagate SSHD, 2TB WD Green, 8TB Seagate
      • Graphics card(s):
      • Asus Strix GTX970OC
      • PSU:
      • Corsair AX750 (modular)
      • Case:
      • Coolermaster HAF932 (with wheels)
      • Operating System:
      • Windows 10 Pro 64bit, Ubuntu 16.04LTS
      • Monitor(s):
      • LG Flattron W2361V
      • Internet:
      • VirginMedia 200Mb

    Re: News - CUDA 5 - Kepler at its best

There's clear potential for accelerating scientific simulations and media encode/decode, but also games, where a custom algorithm is sometimes needed to provide a new visual effect or to simulate weather systems - workloads that require massive parallelism.
    I bought an example of this a while ago. There was a special offer on Just Cause 2 on the PC (very, very cheap!) so given I loved that game on the XBox, I bought it for my PC. I was pretty unhappy then when I was unable to get anything like a decent frame rate without resorting to sub-console graphics levels.

    However, when I tried the option to "run the water simulation on the GPU instead of CPU" (not the exact description, but close enough) the difference was staggering. Not only were the graphics now very smooth, but I was also able to ramp up the resolution to 1080p and hit the high AA and AF settings.

I know that AMD Phenom IIs aren't exactly powerhouses these days, but I didn't expect that moving one game aspect from CPU to my (now elderly) GF460 would make such a marked difference.

My point being that although CUDA and OpenCL don't seem relevant to gamers at the moment (as the article says), they might well be increasingly so as GPUs get more and more powerful. Personally, though, I think it's a shame that NVidia decided to do their own thing rather than get behind OpenCL - fragmentation = bad!
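To make the parallelism point concrete, here's a minimal CUDA sketch of the kind of per-cell water update described above - the kernel, grid size and relaxation rule are made-up stand-ins, not Just Cause 2's actual code:

Code:
#include <cuda_runtime.h>
#include <utility>

// Hypothetical height-field water step: each thread relaxes one cell
// towards the average of its four neighbours. A million cells update in
// parallel - exactly the kind of work a single CPU core crawls through.
__global__ void waterStep(const float* in, float* out, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x <= 0 || y <= 0 || x >= w - 1 || y >= h - 1) return;

    int i = y * w + x;
    float avg = 0.25f * (in[i - 1] + in[i + 1] + in[i - w] + in[i + w]);
    out[i] = in[i] + 0.5f * (avg - in[i]);
}

int main()
{
    const int w = 1024, h = 1024;
    float *a, *b;
    cudaMalloc(&a, w * h * sizeof(float));
    cudaMalloc(&b, w * h * sizeof(float));
    cudaMemset(a, 0, w * h * sizeof(float));
    cudaMemset(b, 0, w * h * sizeof(float));

    dim3 block(16, 16), grid(w / 16, h / 16);
    for (int step = 0; step < 100; ++step) {
        waterStep<<<grid, block>>>(a, b, w, h);
        std::swap(a, b);  // ping-pong the read/write buffers
    }
    cudaDeviceSynchronize();
    cudaFree(a);
    cudaFree(b);
    return 0;
}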

    Career status: still enjoying my new career in DevOps, but it's keeping me busy...

  6. #6
    Moosing about! CAT-THE-FIFTH's Avatar
    Join Date
    Aug 2006
    Location
    Not here
    Posts
    32,039
    Thanks
    3,910
    Thanked
    5,224 times in 4,015 posts
    • CAT-THE-FIFTH's system
      • Motherboard:
      • Less E-PEEN
      • CPU:
      • Massive E-PEEN
      • Memory:
      • RGB E-PEEN
      • Storage:
      • Not in any order
      • Graphics card(s):
      • EVEN BIGGER E-PEEN
      • PSU:
      • OVERSIZED
      • Case:
      • UNDERSIZED
      • Operating System:
      • DOS 6.22
      • Monitor(s):
      • NOT USUALLY ON....WHEN I POST
      • Internet:
      • FUNCTIONAL

    Re: News - CUDA 5 - Kepler at its best

    Quote Originally Posted by crossy View Post
    I bought an example of this a while ago. There was a special offer on Just Cause 2 on the PC (very, very cheap!) so given I loved that game on the XBox, I bought it for my PC. I was pretty unhappy then when I was unable to get anything like a decent frame rate without resorting to sub-console graphics levels.

    However, when I tried the option to "run the water simulation on the GPU instead of CPU" (not the exact description, but close enough) the difference was staggering. Not only were the graphics now very smooth, but I was also able to ramp up the resolution to 1080p and hit the high AA and AF settings.

I know that AMD Phenom IIs aren't exactly powerhouses these days, but I didn't expect that moving one game aspect from CPU to my (now elderly) GF460 would make such a marked difference.

My point being that although CUDA and OpenCL don't seem relevant to gamers at the moment (as the article says), they might well be increasingly so as GPUs get more and more powerful. Personally, though, I think it's a shame that NVidia decided to do their own thing rather than get behind OpenCL - fragmentation = bad!
Potentially it looks good, but it does worry me at times whether AMD or Nvidia engineer things to make running stuff on the GPU look better than running it on the CPU. Look at PhysX, for example - in the past Nvidia made sure it used inefficient x87 paths and was single-threaded when run on a Windows PC, even though more efficient paths exist. On consoles, however, a much more efficient path is used.
    Last edited by CAT-THE-FIFTH; 24-10-2012 at 07:06 PM.

  7. #7
    Registered User
    Join Date
    Mar 2012
    Posts
    13
    Thanks
    0
    Thanked
    0 times in 0 posts
    • noname98's system
      • Motherboard:
      • Asus P8Z77-V
      • CPU:
      • 3570k
      • Memory:
      • 8GB Corsair Vengeance 1600
      • Storage:
      • 1TB Barracuda
      • Graphics card(s):
      • MSI GTX 660 ti Power Edition
      • PSU:
      • CX600
      • Case:
      • NZXT Phantom
      • Operating System:
      • Windows 7 & Mountain Lion
      • Internet:
      • 60Mb Down 6Mb Up

    Re: News - CUDA 5 - Kepler at its best

Now that these drivers are out, maybe we can get some decent GPGPU results from our Kepler cards. I just wonder how long it will take programs like Creative Suite to adapt. Since the drivers are officially out, there should be some improvements, so maybe you could do what you did for Catalyst 12.8 but with CUDA 5 - and instead of games, you could test OpenCL and CUDA programs.
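For what it's worth, the timing side of such a piece is straightforward - here's a minimal sketch of benchmarking a GPGPU workload with CUDA events, where the SAXPY kernel is just a stand-in, not anything from Creative Suite:

Code:
#include <cuda_runtime.h>
#include <cstdio>

// Stand-in workload: y = a*x + y over a large array.
__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 24;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);  // block until the kernel has finished

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);  // GPU-side elapsed time
    printf("saxpy over %d elements: %.3f ms\n", n, ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(x);
    cudaFree(y);
    return 0;
}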

  8. #8
    Senior Member
    Join Date
    Dec 2008
    Posts
    528
    Thanks
    23
    Thanked
    42 times in 35 posts

    Re: News - CUDA 5 - Kepler at its best

    Quote Originally Posted by kalniel View Post
Thanks for the reply, Tarinder. I was surprised there was no coverage of the nVidia drivers - that would have been more consistent given past coverage, and appropriate IMHO.

On the other hand, you make the point in the article that it's only fair to cover CUDA 5 given the AMD driver news. It doesn't make sense to me to cover it only now, and in some way as a response to the driver news.

Nor does it make sense to me to claim that you'll see this in new gaming consoles, or to imply that "It's highly expected that all next-gen AAA game engines will utilise this form of acceleration in one way or another." It's only if you look carefully at the exact wording that you see you might be talking about general GPU compute rather than CUDA, when the whole tone of the article is closely tied to nVidia's Kepler.
In the context of the paragraph containing the AAA statement, I'm happy that it's referring to GPGPU compute in general. That said, it's fairly likely that engines such as Unreal 4 will directly utilise CUDA (in fact, I know the CryENGINE team has been hiring in this department for a while now), even if not for a console, so any ambiguity isn't completely unfounded. Having said this, I've now clarified the point again at the end of the article.

Indeed, we knew well of the dynamic parallelism feature when the card was first launched; however, CUDA 5 is the first official release to support this hardware capability, which was the primary basis for the article subtitle: "Software catches up with the hardware".

I hope this clears up the intent of the article. As to why now: CUDA 5 is less than a week old and was perhaps the most significant step forward on NVIDIA's part in the same time-frame as AMD's driver release.
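For readers wondering what the headline CUDA 5 feature actually looks like, here's a minimal sketch of dynamic parallelism - a kernel launching a child kernel from the device with no CPU round-trip. Kernel names and launch sizes are illustrative; it needs a GK110-class card and roughly nvcc -arch=sm_35 -rdc=true to build:

Code:
#include <cuda_runtime.h>
#include <cstdio>

__global__ void child(int parentBlock)
{
    printf("child thread %d launched by parent block %d\n",
           threadIdx.x, parentBlock);
}

// Dynamic parallelism: the GPU spawns follow-up work itself rather than
// returning to the CPU so the host can issue the next launch.
__global__ void parent()
{
    if (threadIdx.x == 0)
        child<<<1, 4>>>(blockIdx.x);
}

int main()
{
    parent<<<2, 32>>>();
    cudaDeviceSynchronize();  // wait for parents and their children
    return 0;
}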

  9. #9
    Member
    Join Date
    Sep 2012
    Location
    North West
    Posts
    137
    Thanks
    15
    Thanked
    4 times in 3 posts
    • Obscurity's system
      • Motherboard:
      • Gigabyte Z77X-D3H
      • CPU:
      • i5 3570K @ 4.6Ghz
      • Memory:
      • 16Gb Corsair Vengeance 1600MHz
      • Storage:
      • 360Gb across 2 SSD's. 2TB HDD
      • Graphics card(s):
      • MSI 7850 2GBDDR5
      • PSU:
      • Corsair CX500
      • Case:
      • Corsair 200R
      • Operating System:
      • Windows 7 Pro 64 bit
      • Monitor(s):
      • Iiyama ProLite E2409HDS. Iiyama XB2380HS
      • Internet:
      • 60Mb/s

    Re: News - CUDA 5 - Kepler at its best

It'd certainly be nice to see some GPU upgrades without having to pay anything.
    Current specs:
    CPU: Intel i5 3570k Overclocked @ 4.6Ghz GPU: MSI Twin Frozr 7850 @ 1000Mhz Cooler: Arctic Cooling Freezer 13 RAM: 16Gb Corsair Vengeance 1600Mhz
    Motherboard: Gigabyte GA Z77X-D3H

  10. #10
    Member
    Join Date
    Jul 2012
    Location
    Sussex
    Posts
    112
    Thanks
    13
    Thanked
    9 times in 8 posts
    • fail_quail's system
      • Motherboard:
      • ASRock X570M Pro 4
      • CPU:
      • Ryzen 7 3700x
      • Memory:
      • 32GB DDR4 3200mhz
      • Storage:
      • 512GB NVME SSD x2, 320GB SATA SSHD, 4TB HDD
      • Graphics card(s):
      • AMD RX580 8GB
      • PSU:
      • Corsair 750W
      • Case:
      • corsair carbide 88r
      • Operating System:
      • Win10 x64
      • Monitor(s):
      • 24" Dell WFP 2408 + cheap 22" LG monitor
      • Internet:
      • 100meg Virgin cable

    Re: News - CUDA 5 - Kepler at its best

    Quote Originally Posted by CAT-THE-FIFTH View Post
Potentially it looks good, but it does worry me at times whether AMD or Nvidia engineer things to make running stuff on the GPU look better than running it on the CPU. Look at PhysX, for example - in the past Nvidia made sure it used inefficient x87 paths and was single-threaded when run on a Windows PC, even though more efficient paths exist. On consoles, however, a much more efficient path is used.
This is the main reason I refuse to buy Nvidia equipment any more.
They saw an open hardware PhysX platform, bought it out, then went out of their way to make it proprietary*.

So rather than the PC platform having a nice hardware physics system, they killed it off and reduced it to a minor eye-candy boost for one component vendor only. No game maker can use PhysX as an essential requirement now, as it only works at viable speeds on <50% of PCs.

*they've:
-killed the dedicated hardware accelerator cards
-disabled hardware PhysX if a competitor GPU is present
-crippled CPU PhysX by actively making it as inefficient as possible (ancient, inefficient CPU instructions and single-threaded code - see the sketch below)

    I hold Nvidia as solely responsible for killing PC hardware physics...
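To put the instruction-path point in concrete terms, here's a rough sketch of the general idea (illustrative only, not PhysX's actual source): the same loop written scalar, which is roughly what x87-era code buys you, versus with SSE intrinsics that process four floats per instruction.

Code:
#include <xmmintrin.h>  // SSE intrinsics, standard on PC CPUs since ~2000

// Scalar path: one float per iteration, roughly the throughput of
// x87-style code.
void saxpy_scalar(int n, float a, const float* x, float* y)
{
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// SSE path: four floats per instruction; assumes n is a multiple of 4.
void saxpy_sse(int n, float a, const float* x, float* y)
{
    __m128 va = _mm_set1_ps(a);
    for (int i = 0; i < n; i += 4) {
        __m128 vx = _mm_loadu_ps(x + i);
        __m128 vy = _mm_loadu_ps(y + i);
        _mm_storeu_ps(y + i, _mm_add_ps(_mm_mul_ps(va, vx), vy));
    }
}

Vectorise like that and then split the loop across cores as well, and the gap over a single-threaded scalar path compounds quickly.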
    Last edited by fail_quail; 27-10-2012 at 12:45 AM.

  11. #11
    Seriously casual gamer KeyboardDemon's Avatar
    Join Date
    Feb 2012
    Location
    London
    Posts
    3,013
    Thanks
    774
    Thanked
    280 times in 242 posts
    • KeyboardDemon's system
      • Motherboard:
      • Asus Sabretooth Z77
      • CPU:
      • i7 3770k + Corsair H80 (Refurbed)
      • Memory:
      • 16gb (4x4gb) Corsair Vengence Red (1866mhz) - (Because it looks good in a black mobo)
      • Storage:
      • Crucial M550 SSD 1TB + 2x 500GB Seagate HDDs
      • Graphics card(s):
      • EVGA GTX 980 SC ACX 2.0 (Warranty replacement for 780Ti SC ACX)
      • PSU:
      • EVGA 750 watt SuperNova G2
      • Case:
      • Silverstone RV03
      • Operating System:
      • Windows 10 Pro 64 Bit
      • Monitor(s):
      • Asus Swift PG278Q
      • Internet:
      • BT Infinity (40mbs dl/10mbs ul)

    Re: News - CUDA 5 - Kepler at its best

    Quote Originally Posted by fail_quail View Post
-crippled CPU PhysX by actively making it as inefficient as possible (ancient, inefficient CPU instructions and single-threaded code)
You've covered a lot about PhysX that I didn't know, but the one point I really want to know about is the one I've quoted here. Is it not possible for developers to get round the single-thread limitation by locking a CPU core and dedicating it to PhysX processing on a PC where it detects that there is no dedicated PhysX hardware (you can read that as 'nVidia card' if you like) present? They did it in Borderlands 2, after all.

  12. #12
    Banhammer in peace PeterB kalniel's Avatar
    Join Date
    Aug 2005
    Posts
    31,024
    Thanks
    1,871
    Thanked
    3,382 times in 2,719 posts
    • kalniel's system
      • Motherboard:
      • Gigabyte Z390 Aorus Ultra
      • CPU:
      • Intel i9 9900k
      • Memory:
      • 32GB DDR4 3200 CL16
      • Storage:
      • 1TB Samsung 970Evo+ NVMe
      • Graphics card(s):
      • nVidia GTX 1060 6GB
      • PSU:
      • Seasonic 600W
      • Case:
      • Cooler Master HAF 912
      • Operating System:
      • Win 10 Pro x64
      • Monitor(s):
      • Dell S2721DGF
      • Internet:
      • rubbish

    Re: News - CUDA 5 - Kepler at its best

    Quote Originally Posted by KeyboardDemon View Post
You've covered a lot about PhysX that I didn't know, but the one point I really want to know about is the one I've quoted here. Is it not possible for developers to get round the single-thread limitation by locking a CPU core and dedicating it to PhysX processing on a PC where it detects that there is no dedicated PhysX hardware (you can read that as 'nVidia card' if you like) present? They did it in Borderlands 2, after all.
You're suggesting getting around a single-thread limitation by locking it to a single core? That doesn't get around it at all; it just maximises your single-thread performance on OSen that don't handle threads jumping around, or on CPUs that have turbo modes. It doesn't address the limitation of keeping something that suits parallel processing confined to a single sequential thread, though. The kicker is that the same middleware for consoles doesn't have the same limitation, even if they're not running nVidia GPUs.

The x87 claim holds a little less weight these days I think; however, if it's still true for the wrong reasons then that further prevents parallelisation on the CPU.

  13. #13
    Seriously casual gamer KeyboardDemon's Avatar
    Join Date
    Feb 2012
    Location
    London
    Posts
    3,013
    Thanks
    774
    Thanked
    280 times in 242 posts
    • KeyboardDemon's system
      • Motherboard:
      • Asus Sabretooth Z77
      • CPU:
      • i7 3770k + Corsair H80 (Refurbed)
      • Memory:
      • 16gb (4x4gb) Corsair Vengence Red (1866mhz) - (Because it looks good in a black mobo)
      • Storage:
      • Crucial M550 SSD 1TB + 2x 500GB Seagate HDDs
      • Graphics card(s):
      • EVGA GTX 980 SC ACX 2.0 (Warranty replacement for 780Ti SC ACX)
      • PSU:
      • EVGA 750 watt SuperNova G2
      • Case:
      • Silverstone RV03
      • Operating System:
      • Windows 10 Pro 64 Bit
      • Monitor(s):
      • Asus Swift PG278Q
      • Internet:
      • BT Infinity (40mbs dl/10mbs ul)

    Re: News - CUDA 5 - Kepler at its best

    Quote Originally Posted by kalniel View Post
You're suggesting getting around a single-thread limitation by locking it to a single core? That doesn't get around it at all; it just maximises your single-thread performance on OSen that don't handle threads jumping around, or on CPUs that have turbo modes. It doesn't address the limitation of keeping something that suits parallel processing confined to a single sequential thread, though. The kicker is that the same middleware for consoles doesn't have the same limitation, even if they're not running nVidia GPUs.

The x87 claim holds a little less weight these days I think; however, if it's still true for the wrong reasons then that further prevents parallelisation on the CPU.
OK, I think I get your point now. When you said they were crippling performance by making it inefficient, you were saying that they were preventing parallel processing by forcing the CPU/CPU+GPU to try to deal with PhysX on a single thread instead of making use of all available resources - essentially forcing the hardware to waste precious cycles instead of spreading the load over all available threads/logical cores. Is that right?

Is there something like an idiot's guide to understanding this sort of stuff? I think I need it!

  14. #14
    Banhammer in peace PeterB kalniel's Avatar
    Join Date
    Aug 2005
    Posts
    31,024
    Thanks
    1,871
    Thanked
    3,382 times in 2,719 posts
    • kalniel's system
      • Motherboard:
      • Gigabyte Z390 Aorus Ultra
      • CPU:
      • Intel i9 9900k
      • Memory:
      • 32GB DDR4 3200 CL16
      • Storage:
      • 1TB Samsung 970Evo+ NVMe
      • Graphics card(s):
      • nVidia GTX 1060 6GB
      • PSU:
      • Seasonic 600W
      • Case:
      • Cooler Master HAF 912
      • Operating System:
      • Win 10 Pro x64
      • Monitor(s):
      • Dell S2721DGF
      • Internet:
      • rubbish

    Re: News - CUDA 5 - Kepler at its best

Close - there are several versions of PhysX. There is the GPU-accelerated version, which runs on nVidia cards only and is massively parallel, so every compute unit in the GPU can work on it at the same time. There is the console version, which is also apparently multi-threaded, so it can run on the multiple threads of console CPUs. Then, finally, there is the PC software version, for computers without nVidia GPUs or for physics not well suited to the GPU. This version is single-threaded, so it cannot take advantage of more than one core. Because of that it is limited by a computer's single-threaded performance, which is relatively poor compared to the potential performance of using more than one core, especially as PCs have moved to multiple cores as the way of providing more power instead of faster single cores.

When the physics takes a while to finish, it has the potential to hold everything else up, because you might rely on it for hit detection, or for object locations so you know where to draw things. In the end devs just disable most physics if they have to run it on the PC software version of PhysX, which is why all the effects shown off in PhysX demos are for nVidia cards only.

On the other hand, there is no reason why physics middleware in software can't use multiple cores, so use something like Havok and it runs great on all PCs. The PhysX PC-only limitation is almost entirely arbitrary, because they want to sell you a GPU. Even more despicable is the disabling of GPU PhysX when an AMD card is detected, regardless of whether you have an nVidia card to run the PhysX on as well. In my opinion this goes beyond a positive nVidia card sales incentive and into negative anti-competitive practice.
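To make that single-threaded versus multi-threaded distinction concrete, here's a toy sketch - updateBody() is a made-up stand-in for whatever the physics SDK really does - of the same step loop written both ways:

Code:
#include <algorithm>
#include <thread>
#include <vector>

struct Body { float pos = 0.0f, vel = 1.0f; };

// Made-up per-body integration step; stands in for the real solver.
void updateBody(Body& b, float dt) { b.pos += b.vel * dt; }

// PC software PhysX style: every body stepped on one core.
void stepSingleThreaded(std::vector<Body>& bodies, float dt)
{
    for (Body& b : bodies) updateBody(b, dt);
}

// Console/Havok style: the same loop split across however many cores exist.
void stepMultiThreaded(std::vector<Body>& bodies, float dt, unsigned nThreads)
{
    std::vector<std::thread> pool;
    const size_t chunk = (bodies.size() + nThreads - 1) / nThreads;
    for (unsigned t = 0; t < nThreads; ++t) {
        const size_t lo = t * chunk;
        const size_t hi = std::min(bodies.size(), lo + chunk);
        pool.emplace_back([&bodies, dt, lo, hi] {
            for (size_t i = lo; i < hi; ++i) updateBody(bodies[i], dt);
        });
    }
    for (std::thread& th : pool) th.join();
}

int main()
{
    std::vector<Body> bodies(100000);
    stepSingleThreaded(bodies, 0.016f);   // one core does all the work
    stepMultiThreaded(bodies, 0.016f, 4); // four cores share the same work
    return 0;
}

Nothing about the PC stops the second version from existing, which is what makes the single-threaded limitation look like a choice rather than a constraint.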

  15. Received thanks from:

    KeyboardDemon (27-10-2012)

  16. #15
    Seriously casual gamer KeyboardDemon's Avatar
    Join Date
    Feb 2012
    Location
    London
    Posts
    3,013
    Thanks
    774
    Thanked
    280 times in 242 posts
    • KeyboardDemon's system
      • Motherboard:
      • Asus Sabretooth Z77
      • CPU:
      • i7 3770k + Corsair H80 (Refurbed)
      • Memory:
      • 16gb (4x4gb) Corsair Vengence Red (1866mhz) - (Because it looks good in a black mobo)
      • Storage:
      • Crucial M550 SSD 1TB + 2x 500GB Seagate HDDs
      • Graphics card(s):
      • EVGA GTX 980 SC ACX 2.0 (Warranty replacement for 780Ti SC ACX)
      • PSU:
      • EVGA 750 watt SuperNova G2
      • Case:
      • Silverstone RV03
      • Operating System:
      • Windows 10 Pro 64 Bit
      • Monitor(s):
      • Asus Swift PG278Q
      • Internet:
      • BT Infinity (40mbs dl/10mbs ul)

    Re: News - CUDA 5 - Kepler at its best

OK, I think I'm getting a better understanding of this now.

So, as I see it, you are saying the bottom line is that nVidia are deliberately closing off technologies to competitors and forcing people to use nVidia's closed standards, such as CUDA and PhysX, instead of promoting more rapid development of graphics and GPGPU technologies. In which case I have to agree with you: that is despicable.

On the other hand, AMD/ATI have at least tried to stay at the high-stakes table by introducing Graphics Core Next (GCN), which means that GPUs like the 7970 are equipped for general computing, but they still can't do the physics processing because of the PhysX driver limitations that essentially choke the PC when an AMD card is detected.

Havok has been around since 2000 and has had around 150 titles launched, while since 2005 around 50 PhysX titles have been released, so logic says that PhysX isn't that big a deal. However, I have been playing around with a GTX 580 recently, and my overall impression is that I prefer the results from the 580 over my 6990, as I feel the overall quality is better that way. But I'm torn now: like you, I don't like the way nVidia is behaving towards its competitors, and I feel like they are robbing me of a better gaming experience.

I wonder how different things would have been if ATI had acquired AGEIA instead of nVidia?

  17. #16
    Moosing about! CAT-THE-FIFTH's Avatar
    Join Date
    Aug 2006
    Location
    Not here
    Posts
    32,039
    Thanks
    3,910
    Thanked
    5,224 times in 4,015 posts
    • CAT-THE-FIFTH's system
      • Motherboard:
      • Less E-PEEN
      • CPU:
      • Massive E-PEEN
      • Memory:
      • RGB E-PEEN
      • Storage:
      • Not in any order
      • Graphics card(s):
      • EVEN BIGGER E-PEEN
      • PSU:
      • OVERSIZED
      • Case:
      • UNDERSIZED
      • Operating System:
      • DOS 6.22
      • Monitor(s):
      • NOT USUALLY ON....WHEN I POST
      • Internet:
      • FUNCTIONAL

    Re: News - CUDA 5 - Kepler at its best

    I saw this article on the AMD Gaming Evolved programme:

    http://techreport.com/review/23779/a...volved-program

    Katsman made it clear that his company has ongoing relationships with both AMD and Nvidia. The folks at Nixxes "always have a good time" working with both firms, he said, and with Human Revolution, Nixxes was "just as much in touch" with Nvidia as with AMD. Katsman pointed out that engaging both companies is necessary to ensure players get the best experience. Nobody wants their message boards flooded with bug reports and complaints, after all.

    Nevertheless, Nixxes seems to favor Gaming Evolved over Nvidia's developer program. According to Katsman, what AMD brings to the table is simply more compelling, and he views the AMD team as more dedicated. While he didn't delve too deeply into specifics, he mentioned that AMD engineers helped Nixxes implement not just Radeon-specific functionality in their games, but also MSAA antialiasing support and general DirectX 11 features. The two companies collaborate sometimes over Skype and sometimes in person, when AMD engineers visit the Nixxes offices in Utrecht, Holland.

  18. Received thanks from:

    KeyboardDemon (27-10-2012)
