And devs can now access the XeSS DevMesh program to test it in their games.
All looks very nice. I'm not in desperate need of a new card, but it's going to come down to price and availability. If they can get out a competitive card that's priced decently and has good availability, it looks like a good option.
Having a look at the video, I'm thinking that I don't really like the lighting changes in Metro Exodus when they turned on RT.
With over 100,000 employees, Intel is a formidable giant that can pull off something new EXTREMELY fast.
I think Intel (too) is probably a very top-down company, so it can be very slow to get anything done quickly.
Turning a big company, turning a government... one word: supertanker.
Do not discount the possibility that the next Xbox is running on Intel. I wouldn't even discount the possibility that the Series X+/PS5 Pro (or whatever they call them) has Intel inside. If there's a big enough step up from the existing devices, any kind of emulation layer will be moot (and probably not that relevant on the Xbox running DirectX).
All these superscaling (is that the right term?) technologies mean nothing until they are available through DirectX (or Vulkan). Once they are, devs can just turn them on and let AMD/Intel/Nvidia do the best job of implementing them, rather than everyone having to wait for a dev to write per-vendor implementations. Let's face it, devs are going to either 1) implement nothing, or 2) implement just one (and if it's Nvidia's, it's Nvidia-specific and so useless for AMD/Intel; if it's not, I'm sure all the RTX 30-series owners will be unhappy their tensor cores go unused!).
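To illustrate what "available through DirectX (or Vulkan)" could mean in practice, here's a rough C++ sketch of a vendor-agnostic upscaler interface. To be clear, nothing like this exists in either API today; every name below is made up for illustration.

```cpp
// Hypothetical sketch only - no such interface exists in DirectX or Vulkan today.
// The idea: the game asks the runtime for "an upscaler" and the driver supplies
// whichever backend suits the installed GPU (XeSS, DLSS, FSR, ...), so the game
// code never has to name a vendor.
#include <cstdint>
#include <memory>

struct Texture;  // opaque stand-in for a GPU resource

struct UpscaleDesc {
    uint32_t renderWidth;    // resolution the game actually renders at
    uint32_t renderHeight;
    uint32_t displayWidth;   // resolution presented to the player
    uint32_t displayHeight;
};

class IUpscaler {            // hypothetical vendor-neutral interface
public:
    virtual ~IUpscaler() = default;
    virtual void Evaluate(const Texture& colour,
                          const Texture& depth,
                          const Texture& motionVectors,
                          Texture& output) = 0;
};

// In a real API the runtime/driver would pick the best implementation for the
// installed hardware; this stub only marks where that selection would happen.
std::unique_ptr<IUpscaler> CreateUpscaler(const UpscaleDesc& /*desc*/) {
    return nullptr;  // placeholder - no backend in this sketch
}
```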
That's not how these work, unfortunately - they're not something a dev can just turn on, because they're game-engine specific. Each game engine works in quite different ways, so the people who make the game engines are the ones who need to expose the motion vector (etc.) data that the model needs. If you could persuade devs to use standardised game engines, then it's more likely it would become a case of just turning it on and leaving the implementation to the gfx vendors.
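To give a sense of why that integration is engine work rather than a driver toggle, here's a rough sketch of the per-frame data a temporal upscaler along the lines of DLSS/XeSS expects the engine to hand over. The struct and field names are invented, but motion vectors, depth and camera jitter are the kinds of inputs these techniques are publicly documented as needing.

```cpp
// Hypothetical sketch of the per-frame inputs a temporal upscaler needs.
// None of these come for free: the engine has to render motion vectors,
// expose its depth buffer and jitter its camera every frame, which is why
// the integration lives in the engine rather than the driver.
struct Texture;  // opaque stand-in for a GPU resource

struct TemporalUpscalerInputs {
    const Texture* colour;         // low-resolution, aliased scene colour
    const Texture* depth;          // scene depth, used for disocclusion handling
    const Texture* motionVectors;  // per-pixel screen-space motion, engine-supplied
    float jitterX;                 // sub-pixel camera jitter applied this frame
    float jitterY;
    bool  resetHistory;            // set on camera cuts to avoid ghosting
};
```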
What DX standardisation should eventually bring is a level of support for different methods on the consumer end - i.e. rather than knowing I need to have X model of GeForce card to support it, it becomes a DX feature level. But that doesn't affect the bulk of the developer work.
I see what you are saying, but for AMD's implementation, because it's not AI driven, all you need to do is tell the driver what bits to scale and what not to (the UI) - it's why AMD's system is supposed to take hours at most to implement. If DirectX supported that, it would presumably be very trivial for any dev to just mark the different bits and be done. The problem is you then lose the benefits of any AI-driven enhancement (but maybe the DX support could be made flexible?). I still think having these competing implementations is bad - how many games am I going to be unable to use FSR on with my AMD card because the dev has implemented Nvidia's vendor-locked version (and, knowing Nvidia, probably got paid for it, so FSR will never happen)? It's like Nvidia GameWorks/PhysX all over again and it sucks.
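As a rough illustration of why a purely spatial upscaler is said to be quick to integrate, the main change is just where the pass sits in the frame: render the 3D scene at a lower resolution, upscale it, and only then draw the UI at native resolution. The function names below are made up and the bodies are empty placeholders.

```cpp
// Sketch of the frame ordering for a spatial upscaler (FSR 1.0 style).
// The UI is drawn after the upscale so text and HUD elements stay sharp,
// which matches the "mark what to scale and what not to" point above.
struct Texture {};  // stand-in for a render target

void RenderScene(Texture& lowResTarget) { /* 3D world at reduced resolution */ }
void SpatialUpscale(const Texture& in, Texture& out) { /* single post-process pass */ }
void RenderUI(Texture& nativeTarget) { /* HUD/text drawn at native resolution */ }

void RenderFrame(Texture& lowResScene, Texture& nativeBackbuffer) {
    RenderScene(lowResScene);                       // everything that benefits from upscaling
    SpatialUpscale(lowResScene, nativeBackbuffer);  // scale up to display resolution
    RenderUI(nativeBackbuffer);                     // UI excluded from the upscale
}
```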
Even the simple FSR isn't the sort of thing you include in DirectX - we've had anti-aliasing for decades, but can you think of a point when it was ever included in DX? At best you have anti-aliasing methods that may take advantage of underlying DX features, but usually they're completely separate.