DLSS 2.0 trains using non-game-specific content. Feature already supported in 4 games.
So has Nvidia been holding back performance, or?!... Somehow I doubt they didn't know this from the beginning :/
I've not managed to keep on top of tech news properly as I've not had proper access to a computer, but the AMD stuff is intrinsically linked to DirectX/Xbox, isn't it?
The XSX/AMD demos with AI reconstruction used dedicated 4-bit and 8-bit hardware for the calculations.
https://www.eurogamer.net/articles/d...s-x-full-specs
Kalniel: "Nice review Tarinder - would it be possible to get a picture of the case when the components are installed (with the side off obviously)?"
CAT-THE-FIFTH: "The Antec 300 is a case which has an understated and clean appearance which many people like. Not everyone is into e-peen looking computers which look like a cross between the imagination of a hyperactive 10 year old and a Frog."
TKPeters: "Off to AVForum better Deal - £20+Vat for Free Shipping @ Scan"
for all intents it seems to be the same card minus some gays name on it and a shielded cover ? with OEM added to it - GoNz0.
So what's AMD's answer to DLSS?
It's not about holding back performance; this is a new development that improves on what they previously released. It's like when AMD releases a driver that improves performance - we don't suddenly claim they were holding back before.
It does at least confirm, from the horse's mouth, that DLSS hampers image quality.
Actually, they're very different approaches. GPU scaling and RIS are literally that - resolution/image scaling and a sharpening feature - you've been able to do this even without AMD's own tools for years.
DLSS is using ML to guess what the image should look like from a lower-res source. Here is an older look at MS' approach to 'DLSS' using AMD hardware - https://youtu.be/QjQm_wNrvVw?t=1477 - bear in mind this is from last GDC, so the hardware predates even the 5700-series of GPUs.
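If it helps, the non-ML path really is only a few lines of code. A rough Python/numpy sketch below - purely illustrative (a plain pixel-repeat plus unsharp mask, not AMD's actual CAS/RIS shader; all names and numbers are mine):

```python
import numpy as np

def upscale_nearest(img, factor):
    # Nearest-neighbour upscale: just repeat each pixel factor x factor times.
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def sharpen(img, amount=0.5):
    # Unsharp mask: boost the difference between each pixel and a local blur.
    blur = img.copy()
    blur[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                        img[1:-1, :-2] + img[1:-1, 2:] + img[1:-1, 1:-1]) / 5.0
    return np.clip(img + amount * (img - blur), 0.0, 1.0)

low_res = np.random.rand(540, 960)              # stand-in for a 960x540 frame
output = sharpen(upscale_nearest(low_res, 2))   # 1920x1080 result
```

No training, no per-game data - the same fixed filter runs on any image, which is exactly why it can't "invent" detail the way an ML reconstruction can.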
Kalniel: "Nice review Tarinder - would it be possible to get a picture of the case when the components are installed (with the side off obviously)?"
CAT-THE-FIFTH: "The Antec 300 is a case which has an understated and clean appearance which many people like. Not everyone is into e-peen looking computers which look like a cross between the imagination of a hyperactive 10 year old and a Frog."
TKPeters: "Off to AVForum better Deal - £20+Vat for Free Shipping @ Scan"
for all intents it seems to be the same card minus some gays name on it and a shielded cover ? with OEM added to it - GoNz0.
Different approach but the same intention - upscale from a lower-resolution image and sharpen. DLSS is one approach to upscaling; AMD use another, non-ML approach.
The original machine-learning approach has the potential to work better than the AMD approach as it can be optimised better per game, but the problem is you need to train the network sufficiently to a good degree of confidence... which takes time. So if it takes months after a game has launched to get the best iteration of the upscaling algorithm, it's not going to be ideal.
Also, the whole "trained using non-game-specific content" is more of a kludge really, as it is saying it's moving towards a more general-purpose upscaling algorithm, which to a degree is what existing upscaling algorithms are, i.e. not specific to any one game.
AMD will use INT16/INT8/INT4 on their shaders for ML workloads, unlike Nvidia, who use dedicated Tensor cores. In other words, AMD doesn't brag about Tensor cores.
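For anyone wondering what "INT8/INT4 on the shaders" actually buys you: the network's weights and activations get quantised to small integers and the multiply-accumulates run as integer maths, which packs more operations into the same ALUs. A toy numpy sketch of the idea - the scale factors and sizes here are made up, nothing RDNA-specific:

```python
import numpy as np

# Toy INT8 quantisation: map floats in [-1, 1] to int8, do the maths in
# integers, then rescale at the end. This is the general idea behind running
# inference on INT8-capable shaders; the constants are invented for the demo.
scale = 127.0
w_f = np.random.uniform(-1, 1, 256)   # "weights"
x_f = np.random.uniform(-1, 1, 256)   # "activations"

w_q = np.round(w_f * scale).astype(np.int8)
x_q = np.round(x_f * scale).astype(np.int8)

# Accumulate in int32 (as the hardware does) to avoid overflow.
acc = np.dot(w_q.astype(np.int32), x_q.astype(np.int32))
approx = acc / (scale * scale)        # dequantise back to float

print(approx, np.dot(w_f, x_f))       # close results, far cheaper arithmetic
```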
True, though I think the potential is in finding sampling patterns that are different from human-designed ones. It could be that (over-simplifying) for a scene with a lot of sky, the sky bits use some specific upscaler while ground textures use another - or, more likely, the ML comes up with some strange correlation, like there was a duck in the corner.
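Something like this toy dispatcher, except with the threshold and the choice of filter learned rather than hand-picked (everything below is made up for illustration):

```python
import numpy as np

def upscale_tiled(img, factor=2, tile=32, var_thresh=0.01):
    # Per-tile dispatch: flat, sky-like tiles (low variance) get plain
    # pixel-repeat; busy "ground" tiles additionally get an unsharp-mask pass.
    out = np.zeros((img.shape[0] * factor, img.shape[1] * factor))
    for y in range(0, img.shape[0], tile):
        for x in range(0, img.shape[1], tile):
            patch = img[y:y+tile, x:x+tile]
            up = np.repeat(np.repeat(patch, factor, 0), factor, 1)
            if patch.var() >= var_thresh:   # detailed tile: sharpen it
                blur = up.copy()
                blur[1:-1, 1:-1] = (up[:-2, 1:-1] + up[2:, 1:-1] +
                                    up[1:-1, :-2] + up[1:-1, 2:]) / 4.0
                up = np.clip(up + 0.5 * (up - blur), 0.0, 1.0)
            out[y*factor:(y+tile)*factor,
                x*factor:(x+tile)*factor] = up
    return out

result = upscale_tiled(np.random.rand(540, 960))   # 1080x1920 output
```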
Well, we don't know whether previous upscaling methods used on consoles used machine learning to optimise their general-purpose upscaling algorithms - Microsoft has a lot of investment in this area too. From what I gather, DLSS 2.0 is also using a lot of sharpening to make the image look "better", which sounds a bit like what AMD is doing too, but in a more general way.
The result of the learning is a computationally expensive algorithm that would be too expensive to run without specialised hardware (the Tensor cores). That much more complex algorithm can simply do a better job than the simpler ones available to GPUs without specialised AI processing.
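Some very rough back-of-envelope numbers on "computationally expensive" - the layer counts here are illustrative guesses, since the real DLSS network isn't public:

```python
# Ballpark cost of running a small conv net over every output pixel at 4K.
# All figures illustrative; the actual DLSS architecture is not disclosed.
h, w = 2160, 3840            # 4K output resolution
layers, channels, k = 6, 32, 3

# A kxk conv with `channels` in/out costs ~2*k*k*channels^2 FLOPs per pixel.
flops_per_pixel = layers * 2 * k * k * channels * channels
flops_per_frame = flops_per_pixel * h * w
print(f"{flops_per_frame / 1e12:.1f} TFLOPs per frame")        # ~0.9

# At 60 fps that's ~55 TFLOPs/s - well beyond the FP32 shader throughput of
# GPUs from that era (roughly 10-15 TFLOPs/s), which is why dedicated
# low-precision tensor hardware makes the difference.
print(f"{flops_per_frame * 60 / 1e12:.0f} TFLOPs/s at 60 fps")
```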