
Thread: Nvidia touts big advances in its DLSS 2.0 technology

  1. #17
    kalniel

    Re: Nvidia touts big advances in its DLSS 2.0 technology

    So what's AMD's answer to DLSS?
    Quote Originally Posted by albert89 View Post
    It used to be software-simulated RT (and probably still is), which does not affect fps. But that's changing with the implementation of hardware-based RT. However, it would be handy if they could fix their driver issues. And that's coming from a fanboy.
    I don't understand your comment. What does AMD's software (or hardware) simulated ray tracing have to do with upscaling?

    Quote Originally Posted by CAT-THE-FIFTH View Post
    Well, we don't know whether previous upscaling methods used on consoles used machine learning to optimise the general-purpose upscaling algorithms - Microsoft has a lot of investments in this area too. From what I also gather, DLSS 2.0 is using a lot of sharpening to make the image look "better", which sounds a bit like what AMD is doing too, but in a more general way.
    Yep, they could have done some optimisation for the general-purpose scalers, but you don't get the case-specific algorithm choice (if this looks like sky do X, if this looks like ground do Y, etc.). DLSS 2 is moving from the per-game "in game A, find bits that look like sky and do X" to the more general "in any game, find bits that look like sky and do X".
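The case-specific idea above boils down to branching on what a region looks like before choosing how to upscale it. A minimal sketch, with an entirely hypothetical classifier standing in for a trained network (none of these names come from Nvidia's actual pipeline):

```python
def classify_region(region):
    """Hypothetical classifier: label a tile as 'sky', 'ground' or 'other'
    from simple statistics - a stand-in for a trained network."""
    mean_brightness = sum(region) / len(region)
    if mean_brightness > 0.8:
        return "sky"
    if mean_brightness < 0.3:
        return "ground"
    return "other"

def upscale_tile(region):
    """Pick an upscaling strategy per region class - the
    'if this looks like sky do X' idea from the post."""
    label = classify_region(region)
    if label == "sky":
        return ("smooth_interpolation", region)  # flat areas tolerate blur
    if label == "ground":
        return ("detail_preserving", region)     # keep texture detail
    return ("generic_bicubic", region)

strategy, _ = upscale_tile([0.9, 0.95, 0.85, 0.9])
print(strategy)  # sky-like tile -> smooth_interpolation
```

A per-game DLSS 1-style model would in effect learn these branches for one title's content; the DLSS 2 shift described above is to make the classification work across games.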

    From the screenshots it looks like they've added an "if this looks like fencing" case, since DLSS 1 did terribly there before - and that wasn't recoverable with just more sharpening.

    But it strikes me as a hell of a lot of engineering effort for the return - variable rate shading tackles the same sort of problem but puts the power in the devs' hands to determine which parts of a scene can be lower fidelity without impacting the experience. Maybe the two will be combined to automatically lower fidelity on unnoticeable parts of the scene.
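The variable rate shading trade-off mentioned above can be sketched with a toy cost model: the developer supplies a coarse "rate map" saying how many pixels may share one shading result. The function names here are illustrative, not a real graphics API:

```python
def shading_cost(rate_map, tile_pixels=64):
    """Each tile normally shades tile_pixels pixels; a rate of N means
    an NxN pixel block shares one shading result (1/N^2 the work)."""
    total = 0
    for rate in rate_map:
        total += tile_pixels // (rate * rate)
    return total

# Full rate everywhere vs lower fidelity on 'unnoticeable' tiles
full  = shading_cost([1, 1, 1, 1])   # 4 tiles shaded at full rate
mixed = shading_cost([1, 2, 2, 4])   # periphery / flat areas coarsened
print(full, mixed)  # mixed costs well under half of full
```

The point of the post stands out in the numbers: the saving comes entirely from the developer deciding where coarser shading won't be noticed, rather than from a learned reconstruction step.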

  2. #18
    CAT-THE-FIFTH

    Re: Nvidia touts big advances in its DLSS 2.0 technology

    Quote Originally Posted by kalniel View Post
    Yep, they could have done some optimisation for the general-purpose scalers, but you don't get the case-specific algorithm choice (if this looks like sky do X, if this looks like ground do Y, etc.). DLSS 2 is moving from the per-game "in game A, find bits that look like sky and do X" to the more general "in any game, find bits that look like sky and do X".

    From the screenshots it looks like they've added an "if this looks like fencing" case, since DLSS 1 did terribly there before - and that wasn't recoverable with just more sharpening.

    But it strikes me as a hell of a lot of engineering effort for the return - variable rate shading tackles the same sort of problem but puts the power in the devs' hands to determine which parts of a scene can be lower fidelity without impacting the experience. Maybe the two will be combined to automatically lower fidelity on unnoticeable parts of the scene.
    The problem is it still leads to a "one size fits all" approach instead of the much-touted "best for each game" approach, so it might be more refined, but practically the latter approach is too time-consuming, I expect. Also, you have to consider that Turing was made as much for professional VFX markets and commercial machine learning as for gaming. Nvidia is probably trying to find ways of using its leveraged silicon in different ways, especially as RT performance isn't really strong enough.

    Edit!!

    Also, another thing I touched on before - lots of the DLSS 2.0 comparisons with native images need to be examined closely, as I have seen instances of FXAA being applied to the native images, making them look softer, while the extra sharpening on DLSS 2.0 images makes them look better at first glance.
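The bias described above can be shown with a minimal 1-D sketch: an FXAA-style blur lowers edge contrast on the "native" image, while an unsharp mask boosts it on the upscaled one, so the sharpened image pops more even if neither is more faithful. The filters here are crude stand-ins, not the actual FXAA or DLSS sharpening passes:

```python
def box_blur(signal):
    """3-tap box blur - a crude stand-in for FXAA-style softening."""
    out = signal[:]
    for i in range(1, len(signal) - 1):
        out[i] = (signal[i - 1] + signal[i] + signal[i + 1]) / 3
    return out

def unsharp_mask(signal, amount=1.0):
    """Boost edges by adding back the difference from the blurred signal."""
    blurred = box_blur(signal)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

def edge_contrast(signal):
    """Largest jump between neighbouring samples - a rough crispness score."""
    return max(abs(a - b) for a, b in zip(signal, signal[1:]))

edge = [0.0, 0.0, 1.0, 1.0]               # a hard edge, contrast 1.0
print(edge_contrast(box_blur(edge)))      # softened "native": below 1.0
print(edge_contrast(unsharp_mask(edge)))  # sharpened: above 1.0
```

In other words, a side-by-side of a softened native frame against a sharpened reconstructed frame measures the post-processing, not the upscaler - which is why the comparisons need close examination.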

