Re: Nvidia touts big advances in its DLSS 2.0 technology
Quote:
So what's AMD's answer to DLSS?
Quote:
Originally Posted by
albert89
It used to be software-simulated RT (and probably still is), which does not affect fps. But that's changing with the implementation of hardware-based RT. However, it would be handy if they could fix their driver issues. And that's coming from a fanboy.
I don't understand your comment. What does AMD's software (or hardware) simulated ray tracing have to do with upscaling?
Quote:
Originally Posted by
CAT-THE-FIFTH
Well, we don't know whether previous upscaling methods used on consoles used machine learning to optimise their general-purpose upscaling algorithms; Microsoft has a lot of investment in this area too. From what I gather, DLSS 2.0 is using a lot of sharpening to make the image look "better", which sounds a bit like what AMD is doing too, but in a more general way.
Yep, they could have done some optimisation for the general-purpose scalers, but you don't get the case-specific algorithm choice (if this looks like sky do X, if this looks like ground do Y, etc.). DLSS 2.0 is moving from "in game A, find bits that look like sky and do X" to the more general "find bits that generally look like sky and do X".
From the screenshots it looks like they've added an 'if this looks like fencing' case, since DLSS 1.0 did terribly there before, and that wasn't recoverable with just more sharpening.
But it strikes me as a hell of a lot of engineering effort for the return: variable rate shading tackles the same sort of problem but puts the power in the devs' hands to determine which parts of a scene can be lower fidelity without impacting the experience. Maybe the two will be combined to automatically lower fidelity on unnoticeable parts of the scene.
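To make the "case-specific algorithm choice" idea concrete, here's a toy sketch. All names, thresholds, and region classes are invented for illustration; real DLSS works on motion vectors and learned features, not hand-written rules like these.

```python
# Hypothetical sketch of per-region upscaling dispatch: classify a tile
# of the frame, then pick a reconstruction strategy for that class.
# Thresholds and labels are made up purely to illustrate the idea.

def classify_region(mean_luma, edge_density):
    """Crude stand-in for a learned classifier: label a tile of pixels."""
    if edge_density > 0.5:
        return "fencing"   # thin repeating structures: DLSS 1.0's weak spot
    if mean_luma > 0.8:
        return "sky"       # smooth gradient: cheap filtering is fine
    return "ground"        # textured surface: needs detail reconstruction

def upscale_strategy(mean_luma, edge_density):
    """Dispatch a different reconstruction strategy per region class."""
    handlers = {
        "sky":     "bilinear + light sharpen",
        "ground":  "detail reconstruction",
        "fencing": "edge-aware reconstruction",
    }
    return handlers[classify_region(mean_luma, edge_density)]
```

A per-game version would hard-code rules tuned to game A; the "more general" version kalniel describes amounts to making the classifier work across games rather than per title.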
Re: Nvidia touts big advances in its DLSS 2.0 technology
Quote:
Originally Posted by
kalniel
Yep, they could have done some optimisation for the general-purpose scalers, but you don't get the case-specific algorithm choice (if this looks like sky do X, if this looks like ground do Y, etc.). DLSS 2.0 is moving from "in game A, find bits that look like sky and do X" to the more general "find bits that generally look like sky and do X".
From the screenshots it looks like they've added an 'if this looks like fencing' case, since DLSS 1.0 did terribly there before, and that wasn't recoverable with just more sharpening.
But it strikes me as a hell of a lot of engineering effort for the return: variable rate shading tackles the same sort of problem but puts the power in the devs' hands to determine which parts of a scene can be lower fidelity without impacting the experience. Maybe the two will be combined to automatically lower fidelity on unnoticeable parts of the scene.
The problem is that it still leads to a "one size fits all" approach instead of the much-touted "best for each game" one; it might be more refined, but in practice I expect the latter approach is too time-consuming. Also, consider that Turing was made as much for professional VFX markets and commercial machine learning as for gaming. Nvidia is probably trying to find ways of leveraging its silicon in different ways, especially as RT performance isn't really strong enough.
Edit!!
Another thing I touched on before: a lot of the DLSS 2.0 comparisons with native images need to be examined closely, as I have seen instances of FXAA being applied to the native images, making them look softer, while the extra sharpening on DLSS 2.0 images makes them look better at first glance.
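A toy 1-D example of the effect described above: an FXAA-style blur reduces contrast at an edge, while an unsharp mask exaggerates it, which is why a sharpened upscale can "pop" more than a softened native frame at first glance. The filters here are deliberately minimal stand-ins, not what either technique actually computes.

```python
# Illustrative only: compare a blurred edge (FXAA stand-in) with an
# unsharp-masked edge (sharpening stand-in) on a 1-D signal.

def box_blur(signal):
    """3-tap box blur with edge clamping."""
    n = len(signal)
    return [(signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, n - 1)]) / 3
            for i in range(n)]

def unsharp(signal, amount=1.0):
    """Unsharp mask: original + amount * (original - blurred)."""
    blurred = box_blur(signal)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

edge = [0, 0, 0, 1, 1, 1]        # a hard edge in 1-D
softened = box_blur(edge)        # edge contrast reduced, looks "soft"
sharpened = unsharp(edge, 1.0)   # overshoots on both sides of the edge
```

The sharpened signal overshoots past the original range (values below 0 and above 1 around the edge), which reads as extra crispness in a side-by-side screenshot even though no detail was actually recovered.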