Meanwhile, AMD pushes forward with FEMFX, a highly-threaded library for CPU physics.
If I were the only CPU vendor offering up to 32 threads on a mainstream platform, I'd be pushing for highly-threaded CPU physics too...
And of course their highly-efficient SMT implementation gives them an advantage in threads right down the product stack. Plus vendor agnostic. And open. As usual.
I'm happy to admit to being a bit of an AMD fanboy, but I'm one for actual reasons.
Me too, all the way back to them stealing the lead with Athlon 64 and going multi-core in a bigger way than Intel all those years ago.
Whilst I have been back to Intel in more recent times, my next build is definitely going to be a 3rd gen Ryzen.
Got to be honest... I kind of prefer the approach AMD are taking. It's non-proprietary, unlike PhysX, so you could technically use it with Intel as well as AMD CPUs, and it will also work with previous-gen hardware and newer stuff that isn't out yet.
AMD demo is way more interesting.
nVidia demo probably rushed because AMD released theirs...
If it is NVIDIA, then there will always be a trigger inside their code to hamper performance on everything except NVIDIA, and that's why I hate them to the core. Develop quality and innovate, but don't make the rival look bad by rigging the software to make things worse for the competition. Nvidia=ChildLogicUnit
Wasn't at all impressed by that demo video.
The demo video was definitely boring compared to AMD's. I kind of feel that, since Nvidia already had the GTC 2019 conference scheduled, it's AMD who jumped in and stole their thunder.
NVIDIA had so much time to create a meaningful PhysX implementation... But really, AMD is the only company in this market that actually cares about the user experience. I mean, PhysX is GREAT! I've always had NVIDIA cards and I'm glad that happened, as I would have missed PhysX otherwise, but it was still underutilised, to say the least, simply because it was proprietary. Now AMD is rendering another NVIDIA tech redundant and meaningless. Cuz that's what happens, and NVIDIA simply can't learn anything.
4 years already.... blimey.
The thing is, Nvidia will be pushing PhysX for use on their Tesla/Quadro cards in business, and ideally GeForce in games, so I wouldn't necessarily say the CPU side of their code will be as well optimised.
Don't get me wrong, I basically have to buy Nvidia anyway (CUDA), so both are viable options for me, but I personally prefer a standards-based approach, and ultimately AMD's will work on Intel, AMD or any other x86/x64 CPU with the prerequisite features.
Other than the Arkham games, I don't think I've ever had the need to use PhysX. It was a pretty gimmick, but unless Nvidia finds a way to have it implemented in every game with superb optimisation, it's going to stay one.
Never understood PhysX myself, and the option of offloading it to the CPU always seemed like a downgrade.