Software catches-up with the hardware.
Poor Hexus. We already knew all this back when Kepler was released. For example, this article from back in May:
http://www.theregister.co.uk/2012/05...led/page2.html
Originally Posted by theregister
I didn't think the next set of consoles had nVidia chips in, does Hexus know something we don't?
Originally Posted by hexus
If you're serious about PC gaming then these features shouldn't excite you; they should only give cause to worry that the already small market is going to be fragmented even further. We need work on cross-vendor features like OpenCL instead.
And highlighting old features just after an AMD driver release (I don't recall Hexus doing the reverse after an nVidia driver release) smacks of influence on Hexus' decision making. Come on, you're better than that.
I don't agree with your thinking on this one, Kalniel.
CUDA 5 was officially launched last week, so it makes sense to cover it in some way.
Also, NVIDIA rolled out new beta GeForce drivers yesterday that offer up to 15 per cent extra performance, though the majority of gains are sub-five per cent.
http://www.geforce.com/whats-new/articles/nvidia-geforce-310-33-beta-drivers-released/
We took a good look at the improvements and decided that, on balance, the gains weren't significant enough for a full-on analysis along the lines of our Catalyst 12.11 coverage.
Take a look at the amount of AMD vs. NVIDIA coverage in the last few weeks, too.
Thanks for the reply, Tarinder. I was surprised there was no coverage of the nVidia drivers - that would have been more consistent with past coverage and, IMHO, more appropriate.
On the other hand, you make the point in the article that it's only fair to cover CUDA 5 given the AMD driver news. It doesn't make sense to me to cover it only now, and only as a response to the driver news.
Nor does it make sense to me to claim that you'll see this in new gaming consoles, or to imply that "It's highly expected that all next-gen AAA game engines will utilise this form of acceleration in one way or another." It's only if you look carefully at the exact wording that you see you might be talking about general GPU compute rather than CUDA, when the whole tone of the article is closely tied to nVidia's Kepler.
There's clear potential for accelerating scientific simulations and media encode/decode, but also games, where sometimes a custom algorithm is needed to provide a new visual effect or a simulation of weather systems, which require massive parallelism.
I bought an example of this a while ago. There was a special offer on Just Cause 2 on the PC (very, very cheap!) so, given I loved that game on the XBox, I bought it for my PC. I was pretty unhappy then when I was unable to get anything like a decent frame rate without resorting to sub-console graphics levels.
However, when I tried the option to "run the water simulation on the GPU instead of CPU" (not the exact description, but close enough) the difference was staggering. Not only were the graphics now very smooth, but I was also able to ramp up the resolution to 1080p and hit the high AA and AF settings.
I know that AMD Phenom IIs aren't exactly powerhouses these days, but I didn't expect that moving one aspect of the game from the CPU to my (now elderly) GF460 would make such a marked difference.
My point being that although CUDA and OpenCL don't seem relevant to gamers at the moment (as the article says), they might well become increasingly so as GPUs get more and more powerful. Personally, though, I think it's a shame that NVidia decided to do their own thing rather than get behind OpenCL - fragmentation = bad!
Potentially it looks good, but it does worry me at times whether AMD or Nvidia really do make sure things look better running on the GPU as opposed to the CPU. Look at PhysX, for example - in the past, Nvidia made sure it used inefficient x87 paths and was single-threaded when run on a Windows PC, even though more efficient paths exist. On consoles, however, a much more efficient path is used.
Now that these drivers are out, maybe we can get some decent GPGPU results from our Kepler cards. I just wonder how long it will take programs like Creative Suite to adapt. Since the drivers are officially out, there should be some improvements, so maybe you could do what you did for Catalyst 12.8, but with CUDA 5 and, instead of games, OpenCL and CUDA programs.
In the context of the paragraph containing the AAA statement, I'm happy that it's referring to GPGPU compute in general. That said, it's fairly likely that engines such as Unreal 4 will directly utilise CUDA (in fact, I know the CryENGINE team has been hiring in this department for a while now), even if not for a console, so any ambiguity isn't completely unfounded. Having said this, I've now clarified the point again at the end of the article.
Indeed, we knew well of the dynamic parallelism feature when the card was first launched; however, CUDA 5 is the first official release to support this hardware capability, which was the primary basis for the article subtitle: "Software catches-up with the hardware."
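For anyone wondering what dynamic parallelism actually looks like, here's a minimal sketch (the kernel names are made up purely for illustration; it needs a compute capability 3.5 part and building with -rdc=true):

#include <cstdio>

// Child kernel: launched from the GPU rather than from the host.
__global__ void childKernel(int parent)
{
    printf("child of parent thread %d, child thread %d\n", parent, threadIdx.x);
}

// Parent kernel: with dynamic parallelism (CUDA 5, sm_35+) a kernel can
// launch further kernels itself, refining work on-device without a round
// trip back to the CPU.
__global__ void parentKernel()
{
    // Each parent thread spawns its own small child grid.
    childKernel<<<1, 4>>>(threadIdx.x);
}

int main()
{
    // Build with: nvcc -arch=sm_35 -rdc=true dynpar.cu -lcudadevrt
    parentKernel<<<1, 2>>>();
    cudaDeviceSynchronize();
    return 0;
}

Before CUDA 5, the parent kernel would have had to finish and hand control back to the host, which would then decide what to launch next.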
I hope this clears up the intent of the article. As to why now? CUDA 5 is less than a week old and was perhaps the most significant step forward on NVIDIA's part in the same time-frame as AMD's driver release.
Certainly be nice to see some GPU upgrades without having to pay anything
Current specs:
CPU: Intel i5 3570k overclocked @ 4.6GHz
GPU: MSI Twin Frozr 7850 @ 1000MHz
Cooler: Arctic Cooling Freezer 13
RAM: 16GB Corsair Vengeance 1600MHz
Motherboard: Gigabyte GA Z77X-D3H
This is the main reason I refuse to buy Nvidia equipment anymore.
They saw an open hardware PhysX platform, bought it out, then went out of their way to make it proprietary*.
So rather than the PC platform having a nice hardware physics system, they killed it off and reduced it to a minor eye-candy boost for one component vendor only. Now no game maker can use PhysX as an essential requirement, as it only works at viable speeds on <50% of PCs.
*They've:
- killed off the dedicated hardware accelerator cards
- disabled hardware PhysX if a competitor's GPU is present
- crippled CPU PhysX by actively making it as inefficient as possible (ancient, inefficient CPU instructions; single-threaded code)
I hold Nvidia solely responsible for killing off PC hardware physics...
You've covered a lot about PhysX that I didn't know, but the one point I really want to know about is the one I've quoted here. Is it not possible for developers to get round the single-thread limitation by locking a CPU core and dedicating it to PhysX processing on a PC where it detects that there is no dedicated PhysX hardware (you can read that as an nVidia card if you like)? They did it in Borderlands 2, after all.
You're suggesting getting around a single-thread limitation by locking it to a single core? That doesn't get around it at all; it just maximises your single-thread performance on OSes that don't handle threads jumping around, or on CPUs that have turbo modes. It doesn't address the limitation of keeping something that suits parallel processing confined to a single thread, though. The kicker is that the same middleware for consoles doesn't have the same limitation, even if they're not running nVidia GPUs.
The x87 claim holds a little less weight these days, I think; however, if it's still true, for the wrong reasons, then that further prevents parallelisation on the CPU.
OK, I think I get your point more now. When you said they were crippling performance by making it inefficient, you were saying that they were preventing parallel processing by forcing the CPU (or CPU+GPU) to try to deal with PhysX on a single thread instead of making use of all available resources, essentially forcing the hardware to waste precious cycles instead of spreading the load over all available threads/logical cores. Is that right?
Is there something like an idiot's guide to understanding this sort of stuff? I think I need it!
Close - there are several versions of PhysX. There is the GPU-accelerated version: this runs on nVidia cards only and is massively parallel, so each compute unit in the GPU can run it at the same time. There is the console version, which is also apparently multi-threaded, so it can run on the multiple threads of console CPUs. Then finally there is the PC software version, for computers without nVidia GPUs or for physics not well suited to the GPU. This version is single-threaded, so it cannot take advantage of more than one core. Because of that it is limited by a computer's single-threaded performance, which is relatively poor compared to the potential performance of using more than one core, especially as PCs have moved to multiple cores as a way of providing more power instead of faster single cores.
When the physics takes a while to finish, it can hold everything else up, because you might rely on it for hit detection, or for object locations so you know where to draw things. In the end devs just disable most physics if they have to use the PC software version of PhysX, which is why all the effects shown off in PhysX demos are for nVidia cards only.
On the other hand, there is no reason why physics middleware in software can't use multiple cores - use something like Havok and it runs great on all PCs. The PhysX PC-only limitation is almost entirely arbitrary; they want to sell you a GPU. Even more despicable is the disabling of GPU PhysX when an AMD card is detected, regardless of whether you also have an nVidia card to run the PhysX on. In my opinion this goes beyond a positive incentive to buy nVidia cards and into negative, anti-competitive practice.
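Just to illustrate that last point, here's a rough sketch of the general idea (nothing to do with the actual PhysX or Havok APIs - just ordinary host-side C++ with std::thread), splitting a simple particle update across however many cores the machine has:

#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

struct Particle { float x, y, z, vx, vy, vz; };

// Integrate one slice of the particle array. Each worker owns its own range,
// so no locking is needed.
void integrate(std::vector<Particle>& p, std::size_t begin, std::size_t end, float dt)
{
    for (std::size_t i = begin; i < end; ++i) {
        p[i].x += p[i].vx * dt;
        p[i].y += p[i].vy * dt;
        p[i].z += p[i].vz * dt;
    }
}

// Split one physics step across all available cores rather than one thread.
void step(std::vector<Particle>& particles, float dt)
{
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (particles.size() + cores - 1) / cores;

    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c) {
        const std::size_t begin = c * chunk;
        const std::size_t end   = std::min(particles.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back(integrate, std::ref(particles), begin, end, dt);
    }
    for (auto& w : workers) w.join();
}

A single-threaded engine just runs the whole update on one core; spreading the same work over four or six cores is exactly the kind of headroom the PC software path leaves on the table.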
Ok. I think I'm getting a better understanding of this now.
So as I see it, you are saying the bottom line is that nVidia are deliberately closing off technologies to competitors and forcing people to use nVidia's closed standards, such as CUDA and PhysX, instead of promoting more rapid development of graphics and GPGPU technologies. In which case I have to agree with you: that is despicable.
On the other hand, AMD/ATI have at least tried to stay at the high-stakes table by introducing Graphics Core Next (GCN), which means that GPUs like the 7970 are equipped for general computing, but they still can't do the physics processing because of the PhysX driver limitations that essentially choke the PC when an AMD card is detected.
Havok has been around since 2000 and has had 150 titles launched, while since 2005 there have been around 50 PhysX titles released, so logic says PhysX isn't that big a deal. However, I have been playing around with a GTX 580 recently, and my overall impression is that I prefer the results from the 580 over my 6990, as I feel the overall quality is better this way. But I'm torn now, as, like you, I don't like the way nVidia is behaving towards its competitors and I feel like they are robbing me of a better gaming experience.
I wonder how different things would have been if ATI had acquired AGEIA instead of nVidia?
I saw this article on the AMD Gaming Evolved programme:
http://techreport.com/review/23779/a...volved-program
Katsman made it clear that his company has ongoing relationships with both AMD and Nvidia. The folks at Nixxes "always have a good time" working with both firms, he said, and with Human Revolution, Nixxes was "just as much in touch" with Nvidia as with AMD. Katsman pointed out that engaging both companies is necessary to ensure players get the best experience. Nobody wants their message boards flooded with bug reports and complaints, after all.
Nevertheless, Nixxes seems to favor Gaming Evolved over Nvidia's developer program. According to Katsman, what AMD brings to the table is simply more compelling, and he views the AMD team as more dedicated. While he didn't delve too deeply into specifics, he mentioned that AMD engineers helped Nixxes implement not just Radeon-specific functionality in their games, but also MSAA antialiasing support and general DirectX 11 features. The two companies collaborate sometimes over Skype and sometimes in person, when AMD engineers visit the Nixxes offices in Utrecht, Holland.