AMD wants 'FSR' to be open, cross-platform, easy for devs to use, and as impactful as DLSS.
The aim isn't just more processing power every time, but better game visuals, so it's good AMD is making this. I'll be waiting for 2022; I guess new-gen cards will be there by then too.
Nvidia can *sometimes* get away with the whole "use our trademarked term as an open standard" rubbish because of their market dominance, and because they're better at abusing their position and being assholes to their partners & customers. AMD, you are not Nvidia, and frankly I don't think you have it in you.
Even if it's a genuine attempt to create something truly open, it won't survive Nvidia's manipulations & scheming to undermine and kill it. FSR will become your next Freesync.
Gentle Viking (19-03-2021)
Well, I will welcome the AMD brand even more with open arms...
The 3090 now costs up to 20.000 where I live, that's around £2000 give or take... buying a 6900XT for half the price seems like a bargain... and I am not a miner, so uh... goodbye NVIDIA, your price vs performance is not good enough.
"we are evaluating many different options" means we don't know how we are going to do it.
"We want to bring it to market this year. We believe we can do it this year, but at the same time we still have a lot of work to do" means it's not coming out this year.
The reality is that, just using normal GPU cores, you have the same tech that's existed for the last 10 years (shaders, basically), and a thousand times more work than AMD will ever manage has already been done on upscaling with them. It's pretty hard to magic up something new.
Cross platform features and technology usually work better when they're open source (for example AMD's TressFX works better than Nvidia Hairworks... and then there's Freesync and other stuff too).
That's because anyone can supply suggestions/changes to the underlying code and optimize it for the hw in question... in a closed-source environment, only the group which created it has access to it and they need to work with devs to optimize for it.
Consoles and PCs don't quite operate the same way, as they are mostly their own closed systems of sorts (or at least it's a lot easier to make a game for one console than to target a wide range of hw)... furthermore, while consoles DO use Zen 2 and RDNA 2, the way these chips are designed is different compared to PC hw... not to mention the fact that the console OSes are different.
Although... NV does have deep pockets and a history of 'incentivizing' devs to use proprietary features.
Oh, those things that are really good at parallel data processing to accelerate graphics amongst other things?
It isn't really something new though; at a very basic level it's rendering at a lower resolution for speed and then upscaling the output. Remember this has benefits not only for ray tracing but also for normal games. Where it becomes more complex is in games where players would like to turn on ray tracing, because those calculations have to be done on the GPU, which I'd imagine isn't a small overhead.
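Just to illustrate the basic idea in code - a rough sketch of "render fewer pixels, then upscale" with plain bilinear filtering (example resolutions, and in reality this runs on the GPU in a shader, not in Python/NumPy):

```python
# A rough sketch of the basic idea: render fewer pixels, then upscale.
# Purely illustrative -- real games do this on the GPU, not in NumPy,
# and the resolutions here are just example numbers.
import numpy as np

def bilinear_upscale(img, out_h, out_w):
    """Upscale an (H, W, 3) image to (out_h, out_w, 3) with bilinear filtering."""
    in_h, in_w, _ = img.shape
    # Map each output pixel back to a fractional coordinate in the source image.
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.minimum(np.floor(ys).astype(int), in_h - 2)
    x0 = np.minimum(np.floor(xs).astype(int), in_w - 2)
    wy = (ys - y0)[:, None, None]   # vertical blend weights
    wx = (xs - x0)[None, :, None]   # horizontal blend weights
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x0 + 1] * wx
    bot = img[y0 + 1][:, x0] * (1 - wx) + img[y0 + 1][:, x0 + 1] * wx
    return top * (1 - wy) + bot * wy

# Render internally at 720p, present at 1080p: only ~44% of the pixels to shade.
internal_frame = np.random.rand(720, 1280, 3)    # stand-in for a rendered frame
displayed_frame = bilinear_upscale(internal_frame, 1080, 1920)
```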
Upscaling isn't anything new. 'Intelligently' generating new content to fill in the upscaled image, however, is very new, and that's effectively what the ML-based methods are doing. Ray tracing doesn't have any effect on the overhead of ML upscaling; the reason it's tied in with ray tracing is that ray tracing render times are massively resolution-dependent, so anything that lets you lower the resolution nets you massive gains when ray tracing - far more than when using rasterisation. So ML upscaling is a really good fit with ray tracing for that reason. On Nvidia cards the parts of the GPU that do ML upscaling are completely separate from the parts that do ray tracing, so there's no competition for resources. AMD haven't revealed how theirs is going to work yet, though.
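To put some rough numbers on that resolution dependence (the samples-per-pixel and bounce counts here are made-up illustrative values, not from any real game):

```python
# Back-of-the-envelope numbers for why ray tracing gains so much from a lower
# internal resolution: ray counts scale directly with pixel count.
# Samples-per-pixel and bounce counts are made-up illustrative values.
def rays_per_frame(width, height, samples_per_pixel, bounces):
    # One primary ray per sample, plus one secondary ray per bounce.
    return width * height * samples_per_pixel * (1 + bounces)

native_4k    = rays_per_frame(3840, 2160, samples_per_pixel=1, bounces=2)
internal_qhd = rays_per_frame(2560, 1440, samples_per_pixel=1, bounces=2)

print(f"4K native:      {native_4k:,} rays/frame")      # 24,883,200
print(f"1440p internal: {internal_qhd:,} rays/frame")   # 11,059,200
print(f"saving:         {native_4k / internal_qhd:.2f}x fewer rays")  # 2.25x
```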
Yes, exactly what we have been doing for the last 10 years. There has always been lots of effort put into upscaling, particularly with consoles, as they pretty well do it all the time. Without new hardware (i.e. something other than shaders, for example the AI tensor cores Nvidia added), it's very hard to see what AMD can do with a few devs in a year that hasn't already been tried by hundreds of devs over 10 years in their quest to maximise console image quality.
If I were AMD I'd look to add the AI cores to my next-gen GPUs, and I'd spend this year attempting to develop my DLSS-equivalent software for them so it's somewhat ready when the next-gen GPUs get released. In the meantime I'd get my marketing team to fob people off with vague talk while knowing that RDNA 2 is never gonna support it. It's not like many people will care, as hardly anyone has been able to buy RDNA 2 anyway.
Of course the job the tensor cores are doing can be done by shaders; it just takes a bit more silicon to do it, as the shaders aren't specialised for the task, but as an upside you get more shaders for when you don't want to run deep-learning tasks.
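For anyone wondering what that job actually looks like - it's mostly convolutions, i.e. big matrix multiplies. A toy sketch with made-up layer sizes, nothing like any vendor's actual network:

```python
# What that deep-learning 'job' mostly is: convolutions, i.e. piles of
# fused multiply-adds that reduce to matrix multiplies. Tensor cores just run
# these faster per unit of silicon than general shader ALUs do.
# Layer sizes below are made up -- this is not any vendor's real network.
import numpy as np

def conv3x3(feature_map, kernels):
    """Naive 3x3 convolution: (H, W, Cin) * (3, 3, Cin, Cout) -> (H-2, W-2, Cout)."""
    h, w, cin = feature_map.shape
    _, _, _, cout = kernels.shape
    out = np.zeros((h - 2, w - 2, cout))
    for dy in range(3):
        for dx in range(3):
            # Each filter tap is one big matrix multiply -- exactly the shape of
            # work that either tensor cores or plain shaders can chew through.
            patch = feature_map[dy:dy + h - 2, dx:dx + w - 2, :].reshape(-1, cin)
            out += (patch @ kernels[dy, dx]).reshape(h - 2, w - 2, cout)
    return out

features = np.random.rand(64, 64, 8)      # small low-res feature map
weights  = np.random.rand(3, 3, 8, 16)    # 3x3 kernels, 8 -> 16 channels
layer_output = conv3x3(features, weights) # shape (62, 62, 16)
```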
It could be that the huge Infinity Cache might speed that task up by keeping all the coefficient tables in cache and only having to stream the data, or by tile rendering to keep the whole region cached so the upscale pass has minimal impact. Perhaps the cache is the new hardware.
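Purely speculating, the tiling idea would look roughly like this (the tile size and bytes-per-pixel below are my own assumptions, not anything AMD has said):

```python
# Rough sketch of the tiling idea: pick a tile small enough that its working
# set fits comfortably in a big on-die cache, so the upscale pass streams each
# pixel through cache once instead of bouncing off VRAM.
# Tile size and bytes-per-pixel are assumptions, not AMD's figures.
TILE = 256              # tile edge in pixels (hypothetical)
BYTES_PER_PIXEL = 16    # e.g. colour + depth + motion vectors (assumed)

def tiles(width, height, tile=TILE):
    """Yield (x, y, w, h) rectangles covering a width x height frame."""
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            yield x, y, min(tile, width - x), min(tile, height - y)

per_tile_bytes = TILE * TILE * BYTES_PER_PIXEL
print(f"per-tile working set: {per_tile_bytes // 1024} KiB")             # 1024 KiB
print(f"tiles to cover a 4K frame: {sum(1 for _ in tiles(3840, 2160))}") # 135
```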