And we are reminded that Battlefield V now has DLSS, for a 40 per cent performance boost.
It's great to see DLSS being used, as it almost makes up for the performance hit that comes from enabling RTX; generally they've got the loss down to single figures when enabling both features. It's almost enough to persuade me that Nvidia isn't trying to flog a dead horse. Unfortunately, in typical Nvidia fashion, they go and spoil it all by trying to artificially segment the market.
At this rate people are just going to see that a game supports RTX & DLSS and avoid it like the plague.
Originally Posted by TechPowerUp
Well, I can tell you that I'm sorely, sorely disappointed by today's implementation of DLSS into Battlefield V.
3440 x 1440 isn't supported, despite the monitor in question featuring an NVIDIA logo. So my £1200 graphics card and £700 monitor can't run DLSS, and therefore suffer terrible framerates with RTX enabled.
Likely going to be the same story with Metro and every other game that will support DLSS and RTX.
Raging, angry, upset, all kinds of emotions. About two minutes from putting the whole lot up on ebay and giving up on gaming.
I'm only guessing, but that's probably because it's a non-standard resolution. By the looks of things DLSS has to be 'trained' for each situation separately; for example, it's not supported in Exodus at 1080p without RTX being enabled, probably because they've only 'trained' the AI at that resolution with RTX on.
IDK how far that goes, but based on what TPU says I'm guessing the AI has to be 'trained' for each resolution and every state of RTX on each card. Maybe they'll patch the game or drivers with 3440x1440 at a later date.
EDIT: Just thinking about it: if it's for each card, each resolution, and for RTX on/off, that's a lot of 'training' that needs to be done. Five cards, three main resolutions, and two states of RTX means 30 trained AI models just for the three main resolutions.
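As a rough back-of-the-envelope sketch in Python (the card list and resolutions below are just my assumptions, nothing Nvidia have confirmed), the permutations multiply quickly:

# Hypothetical count of per-game DLSS models, if every combination
# really did need its own training run.
from itertools import product

cards = ["RTX 2060", "RTX 2070", "RTX 2080", "RTX 2080 Ti", "Titan RTX"]  # assumed 5-card line-up
resolutions = ["1920x1080", "2560x1440", "3840x2160"]                     # the 3 'main' resolutions
rtx_states = ["RTX on", "RTX off"]

combos = list(product(cards, resolutions, rtx_states))
print(len(combos))  # 5 * 3 * 2 = 30 separate models per game

Add more games, more resolutions or different quality presets and the number balloons even further.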
Last edited by Corky34; 13-02-2019 at 08:03 PM.
It's quite likely there will be repeating patterns once there's enough data, so I'd expect the need to train every combination separately to gradually go away.
Otoh nV really needs to speed up the training and distribution of the resulting models if this is supposed to turn into an accepted option.
Yeah, I'm fully aware of the reasons why they've only chosen to support three resolutions, but putting your name on a monitor and supporting four resolutions under the G-SYNC umbrella (1920 x 1080, 2560 x 1440, 3440 x 1440 and 3840 x 2160), yet only actually using three of them for DLSS, seems a little unfair to me. I've paid just as much money for this monitor and kind of went into this with an expectation, due to the G-SYNC badge, of some kind of support.
Ultrawides are just as common as UHD monitors according to the Steam Hardware Survey of January, so why aren't they supporting them?
Also, it is essentially just the same as 1440p but with more width; I fail to see what difference it makes. It's literally just extra pixels at the sides.
I've put a few hours into BFV tonight to test the DLSS performance, and I have to say I am pretty impressed so far. Note that I have seen performance vary between patches, so the numbers below are all based on the current DLSS patch... I've posted other numbers before from earlier patches, and the trend is generally upwards in terms of FPS as the game is further optimised.
I'm running a 6700K @ 4.6GHz and an RTX 2080, at 2560 x 1440.
Performance varies hugely depending on which map you are playing, but picking a map like Rotterdam really shows up the differences. The very start of the first single player mission has a similar effect.
Running everything on ultra settings, inc RTX on ultra, DLSS off = average framerate of around 45fps. G-Sync makes that very playable but it's not stunning.
Same scenario but with DLSS on = average framerate of around 65fps. Huge improvement that just runs really well.
If I dial things back a touch to what I was running before this patch - everything on ultra but with RTX set to "medium" - I would usually get around 70fps on the same map. With DLSS on, that jumps up to an average of 100fps.
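Quick sanity check on those averages in Python (my own rounding of what I saw, so treat it as rough):

# Average fps from the runs described above, as (DLSS off, DLSS on).
runs = {
    "Ultra everything, RTX ultra":  (45, 65),
    "Ultra everything, RTX medium": (70, 100),
}
for name, (off, on) in runs.items():
    print(f"{name}: {(on - off) / off:.0%} uplift with DLSS")
# Both come out at roughly +43-44%, which lines up with the headline 40 per cent figure.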
As for image quality - it's a complete non-issue when playing a fast-paced game like this. Sure, if you take a screenshot and compare, you will see a blurriness caused by the upscaling... as you'd expect - but when the image is moving? I don't notice it.
Really looking forward to seeing how this looks on Exodus this weekend (on steam, as I pre-ordered before they decided to screw over the player base!). Their RTX implementation looks great so far!
DLSS is only as good as how it was trained, and frankly I find this hullabaloo over DLSS a bit odd. You have a chequerboard-style supersampling system designed to upscale from a lower resolution, which can produce results representative of a directly rendered image. But what people have to remember is that it's not DLSS giving the performance gain; it's the fact that the rasterisation side is rendering at a lower resolution.
If you drop your resolution to the bracket below (4k to 1440p, 1440p to 1080p and 1080p to 720p), do you get similar FPS uplifts?
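For a rough sense of scale (just my own pixel maths, not anything measured in-game), here's how much the raster workload drops at each bracket:

# Raw pixels per frame at each common resolution, and the ratio between brackets.
resolutions = {
    "3840x2160 (4K)":    3840 * 2160,
    "2560x1440 (1440p)": 2560 * 1440,
    "1920x1080 (1080p)": 1920 * 1080,
    "1280x720 (720p)":   1280 * 720,
}
names = list(resolutions)
for higher, lower in zip(names, names[1:]):
    ratio = resolutions[higher] / resolutions[lower]
    print(f"{higher} pushes {ratio:.2f}x the pixels of {lower}")
# 4K -> 1440p is 2.25x, 1440p -> 1080p is ~1.78x, 1080p -> 720p is 2.25x.

Of course fps doesn't scale perfectly with pixel count, but it gives an idea of where the headroom comes from before the tensor-core upscale adds its own cost back.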
To me, the only smart thing about DLSS is using a pretty well-designed ASIC to get pretty bang-on image quality afterwards.
But when you get bad image quality at 4K, is that down to the lower-resolution rendering, or does the technology still need to be improved to give much better fidelity?
But back on topic: DLSS can't be trained on every permutation of system, so I hope DLSS is not used in benchmarks, because it is completely incongruous from system to system. Has DLSS been trained with textures high, medium or low? Has it been trained with LOD far or near? Ambient occlusion on or off? Etc, etc.
There's a lot more to the training than just resolution.
But it is a network overlaid onto the screen buffer, and the network they trained doesn't cover your full screen size so the bits at the side would be fuzzy. Maybe you would be OK with that, but many wouldn't.
The alternative that would work is to train for your ultra-wide monitor and then use that network on the more common widescreen. But that is more work for the graphics card, and would no doubt mean a less impressive performance uplift for lower res cards. I can see why they train for every specific resolution.
This tech would also mean that if a game ever gets texture updates, all the learning needs to be re-done. If the task is automated and fast enough that might not be a problem, but I can imagine games not getting texture improvements because it means extra work and a possible delay on release. The more I think about it the more delicate the system seems; it could be the new 3D glasses.
Last edited by DanceswithUnix; 14-02-2019 at 09:05 AM.
Did someone say "cinematic quality experience"?
That's 24fps, right?
They could totally solve this issue with multiple resolutions/configurations on DLSS by, I dunno, uploading 'samples' from people playing the game on their PC with DLSS off.
(Edit--My bad, forgot DLSS runs off files already created by the farms and not uploading data on the fly)
Sure, those tensor cores aren't being used, and they have broadband, right (who doesn't have broadband, right?), so I'm sure people wouldn't mind allowing Nvidia to use some, just some, of their resources during playthroughs to supplement their AI farms using the free tensor cores. It will greatly benefit other players; you'll be doing a service!
You'll be able to opt-out of course on page 14 of the privacy document in that hidden folder there yea.
See! Solved! :-D
Last edited by ValkyrieTsukiko; 14-02-2019 at 12:39 PM. Reason: Confused about DLSS func, updated
Yes, I will give you sh#t for this. I really don't think the feature has anywhere else to go with this demo. This is it; this is all it will be.
Ultimately, developers will have to be re-trained in map design in order to get the lighting "right", and some YouTubers have complained about it being "too dark".