Good news for owners of NVIDIA's GeForce 8 series graphics cards. Following the recent acquisition of Ageia, NVIDIA's CEO has let it be known that PhysX technology will be added to existing GeForce cards via a simple software update.
Could be handy, but won't it impact the frame rate if part of the card is busy doing PhysX calculations, leading to worse rather than better performance on NVIDIA cards?
Or is this just a ploy to tap into the unused potential of SLI to boost the sales of their motherboards and cards...
Yes, to both of those I think Lucio.
We'll have to see which a game uses most - CPU cores or GPU power. If GPU (like most games at the moment), then doing physics on unused CPU cores seems a better way of doing things (and Intel will be pushing this with its acquisition of Havok).
I have never liked the idea of giving up graphics power for physics. I was under the impression that extra physics puts your GPU under more pressure from all the extra bits and pieces it needs to render.
How much of a trade-off is there between running a dedicated PPU chip and having it emulated by the GPU? If there is a significant difference, then I would prefer the PPU integrated onto the GPU die. That way the power of the PPU could be matched with that of the GPU.
In any case, physics will only be properly viable when it's part of DirectX.
When this was announced I was expecting that to be the case too. I certainly wasn't expecting it to just be a software thing. If that's the case, what little impression of PPUs I had has been lowered further - if a GPU can do it alongside its normal graphics processing, then it can't be taking that much of a hit. So did the PPU do much at all? Was it just a marketing gimmick for what was primarily a software layer?
Doesn't this make a mockery of PhysX?
I mean... they told us that it ADDED to the gaming experience by doing more... now it turns out the card does it anyway.
I'm baffled.
Originally Posted by Advice Trinity by Knoxville
The key thing is they've ported PhysX to CUDA, which is NVIDIA's way of leveraging the GPU's power. Ageia said all along that graphics cards were somewhat suited to these calculations (more so than an x86 CPU, say), just not as well suited as a dedicated unit. That'll still be the line, I'd guess.
If (haha) ATi can implement CUDA on their GPUs then it's win-win. But more likely we will see it as part of DX12 or whatever, when Microsoft wakes up and decides to knock other companies' proprietary systems out of the way again.
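To see why physics calculations suit GPUs better than a handful of x86 cores, here's a toy sketch (plain Python for readability - this is not PhysX or CUDA code, just an illustration): each particle's update depends only on its own state, so the per-particle loop is trivially data-parallel and could run as one GPU thread per particle across thousands of particles.

```python
# Toy rigid-particle step: semi-implicit Euler integration under gravity.
# Every iteration of the loop is independent of the others, which is
# exactly the shape of workload a GPU (or the Ageia PPU) eats for breakfast.

GRAVITY = -9.81  # m/s^2, acting on the y axis

def step(particles, dt):
    """Advance each particle one timestep.
    particles: list of dicts with 'pos' and 'vel', each an [x, y] pair."""
    for p in particles:                      # <- parallelisable: one thread per particle
        p['vel'][1] += GRAVITY * dt          # integrate velocity
        p['pos'][0] += p['vel'][0] * dt      # integrate position
        p['pos'][1] += p['vel'][1] * dt
        if p['pos'][1] < 0.0:                # crude ground-plane bounce
            p['pos'][1] = 0.0
            p['vel'][1] = -0.5 * p['vel'][1]
    return particles

# A particle launched sideways from 10 m up, simulated at ~60 fps.
particles = [{'pos': [0.0, 10.0], 'vel': [1.0, 0.0]}]
for _ in range(10):
    step(particles, dt=0.016)
```

The interesting part isn't the maths, it's the loop: no iteration reads another particle's state, so there's no synchronisation needed. Collisions between particles break that independence, which is where the clever (and proprietary) parts of engines like PhysX and Havok come in.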