Old instruction set could be to blame for poor performance.
I read this report yesterday and while it does seem fairly typical nVidia behaviour, calling it 'deliberately hobbled' is a bit of a stretch - 'deliberately not optimised' is more accurate, and a subtle but important difference!
Still don't like it at all - especially when they do optimise the CPU PhysX for consoles. But it's a clear chance for Havok or OpenCL to provide better performance.
I agree with Kalniel, although you could probably argue it was hobbled to a certain extent by the deliberate choice to use the slower of the two instruction sets available when they developed it (given SSE has been around for about a decade).
As for not making it run on multiple cores, well, that's definitely the "not optimising it" argument rather than "hobbling it".
What's the history of the code behind Ageia physics? When did they first start developing the code? Likely before the PPU was architected, and before GPGPU would have allowed it to run on a graphics card. So maybe the original codebase is old enough that x87 was used, and then a poor development plan (or a cunning one) led to the CPU version falling behind technologically. But still, SSE2 came around in 2001... a long time ago!
Also you can have your compiler spit out SSE2 code for fp arithmetic - surely they're not writing in assembly?
Does PhysX noticeably improve any games? As someone who normally goes the ATI route, PhysX is something I have always ignored, and it's never seemed a problem.
It's just cheaper and less effort to code for x87. Neither Ageia nor nVidia has any reason to SIMD-optimise for SSE when all they're interested in is selling you an AIB. It's one of the less pleasant side effects of commercial software: commercial interests are the primary concern.
I'm certain I read this elsewhere last year, this is old news. Still, it's disappointing (but not in the least bit surprising) that nothing has changed in that time.
It's one of the physics APIs available - that means devs can just make certain calls and the middleware will handle all sorts of other stuff that would otherwise require a lot of coding. There are a number of RPGs, for instance, that have objects that can be picked up and moved around etc., and Havok or PhysX means it just works, letting the devs concentrate more on other things. So yes, I'd say physics middleware certainly improves games.
However PhysX over any of the others? Not in my experience, in fact games that use Havok seem to use physics better imho.
You can see a difference in Batman DA, but to be honest, not one that makes it really worth it.
I'm pretty sure someone will probably just recode it one day...
It's not really physics; it's mostly used as a means to get better visual effects. Surprisingly, PhysX is mainly used for graphics these days, due to the use of particle effects.
Not really surprising. For Nvidia to recode it would be commercial suicide, so unless some talented coder outside of Nvidia is going to do it, it's never going to happen.
Yet another reason why developers need to use open physics middleware and let PhysX die the death it needs to.