A few comments on here about the potential abilities of DLSS at 1080p or 1440p, I thought DLSS was only going to be available at 4k?
It is up to the game developers which resolutions DLSS is trained for (they pay Nvidia to run the game through their deep learning machine). I have a feeling Final Fantasy was 4K only because development was halted before it was finished. Apparently BFV will allow DLSS at 1080p, 1440p and 4K, but that will be confirmed when we get the patch. My thinking is that DLSS is what's going to save the 2060 from being a useless card for ray tracing, and it may allow it to drive a 4K monitor without DXR.
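As a rough illustration of why that approach is attractive: rendering at a lower internal resolution and reconstructing the output (which is what DLSS does) saves a lot of shading work. A quick back-of-envelope pixel count - a sketch only, not how Nvidia's pipeline actually works:

Code:
# Rough pixel-count comparison: an internal 1440p render reconstructed to 4K
# shades well under half the pixels of a native 4K render.
native_4k = 3840 * 2160
internal_1440p = 2560 * 1440
print(internal_1440p / native_4k)  # ~0.44, i.e. roughly 44% of the pixel work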
I for one am finding it interesting to hear your opinions, given you seem to be the target market.
That's always been the way though. DX10 required people to buy Vista, and DX9 needed people to buy an FX 5800, when people didn't like either. But usually those products were seen as necessary stepping stones to what was obviously the future, whereas people don't seem convinced that the 2080's feature set is where the future lies, or, like the 5800, they think it's worth waiting for the next generation to come out.
It's a clever technique, but as someone who often goes back and plays older games I have to wonder if it will trip itself up as driver and DirectX changes alter the rendering and make the learnt renderings of the game less useful and more likely to artefact. Time will tell if it works at any resolution for the likes of me.
Very true - I have found ray tracing really impressive, but it's one of those things that I didn't really appreciate was "missing" until I tried it out. I had never realised, for example, that game engines only reflect what the player sees on screen at any one point (planar reflections excluded) - and that things off-screen were just not reflected. Now that I've seen off-screen reflections working in BFV, though, I notice them missing all over the place in other games. Really looking forward to the implementation in Tomb Raider, for example, if they can pull it off in a sensible way. Off-screen reflections are only one part of the magic of course, but it's something that has stood out for me... and how much better it is than the old planar trick.
The initial badly optimised release of BFV's ray tracing didn't really help either (where you *did* see a 50% hit to FPS), even if no one should really have been surprised at an EA game being poorly optimised at launch.
edit: I am aware that there are many tricks that can be used to simulate off-screen reflections, but most produce poor results or are very expensive to use... and are very different to the results that ray tracing can produce.
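To make the off-screen point concrete, here's a very rough sketch of the difference - illustrative Python-style pseudocode only, not real engine code, and screen_buffer, scene_bvh and their methods are made-up stand-ins:

Code:
# Screen-space reflections can only sample the already-rendered frame, so
# anything outside the camera's view simply has no data to reflect.
def screen_space_reflection(hit_uv, screen_buffer):
    u, v = hit_uv
    if not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0):
        return None  # reflection ray left the screen: fall back to a cubemap or fade out
    return screen_buffer.sample(u, v)

# A traced reflection ray is tested against the full scene geometry instead,
# so objects behind the camera or off to the side still show up in the mirror.
def ray_traced_reflection(origin, direction, scene_bvh):
    hit = scene_bvh.intersect(origin, direction)
    return hit.shade() if hit is not None else None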
Wish they'd do something about the power efficiency.
HBM2 packages are either 8GB or 4GB. I'm not sure you could mix and match the packages to get 12GB, and I also doubt it would be cost effective to do so when you would probably get much better pricing buying the 4GB/8GB HBM2 packages in bulk. It certainly wouldn't lower the costs by £200, and simply going with 8GB defeats the point of pushing the card as a 4K-capable contender when true 4K textures would likely eat into available memory quite quickly.
Isn't that why Nvidia went with an 11GB configuration? Purely so the card would be able to cope better at 4K?
Edit: As mentioned, looks like they've gone with 4 x 4GB HBM2 stacks for the ~1TB/s of bandwidth; otherwise it would be roughly half that (~500GB/s) with 2 x 8GB stacks.
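As a quick sanity check on those bandwidth figures (assuming the usual 1024-bit interface per HBM2 stack and the ~2Gbps per-pin rate reported for Radeon VII):

Code:
# Back-of-envelope HBM2 bandwidth: stacks x 1024-bit bus x pin rate / 8 bits per byte
def hbm2_bandwidth_gbs(stacks, pin_rate_gbps=2.0, bus_bits_per_stack=1024):
    return stacks * bus_bits_per_stack * pin_rate_gbps / 8

print(hbm2_bandwidth_gbs(4))  # 1024.0 GB/s - the "1TB/s" figure with 4 stacks
print(hbm2_bandwidth_gbs(2))  # 512.0 GB/s - roughly half with only 2 stacks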
The Radeon VII looks nice, but I was hoping it would be around the price of the RTX 2070. I guess having 8GB of HBM2 would lower the performance too much.
It's a marketing decision and nothing more. The technical answer would be to populate all 12 memory channels, but that gives you 12GB, which is the same as the Titan, so they won't want that. You could fill 4 channels with 1GB chips and 8 channels with 0.5GB chips, giving a total of 8GB, but that isn't very special so it probably limits the selling price regardless of what the performance would be like. So 11GB makes it not a Titan, but gives owners some e-peen over upper-mid-range 8GB card owners. Kind of makes sense.
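Just to spell out the arithmetic behind those channel options (nothing clever, only the capacity sums):

Code:
# Capacity for each channel-population option mentioned above
print(12 * 1.0)           # 12GB - all 12 channels with 1GB chips (Titan-style)
print(11 * 1.0)           # 11GB - 11 channels populated (1080 Ti / 2080 Ti-style)
print(4 * 1.0 + 8 * 0.5)  # 8GB  - 4x1GB chips plus 8x0.5GB chips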
As for HBM2 packages, AIUI they come with 1GB dies, so the current AMD range uses parts with a 4-die stack to give 4GB, but parts with only 2 dies stacked are available. However, it seems that HBM2 is no longer the exotic and rare thing it once was, so why skimp on it and produce something where people can look at tables of numbers and say "That only has as much RAM as an old 1080"?
Thank you for taking the time to answer my points.
The problem is release mentality and what people see at release. If they see no games, no support and difficulties, then they form a negative opinion of the launch. The cards performed better than the 10xx series at launch, which everyone was happy with, but buyers were told by Nvidia to wait and see whether the near-double price is worth the investment for features that couldn't be tested or quantified. That is how human mentality works, sadly.
Be that as it may, there are other instances where public opinion is soured when someone takes a feature that already exists and calls it "new". Take Apple, for instance: so many of their "features" are not new and were implemented by competitors a long time ago, so it is a sticking point. What Nvidia did is make a "new" feature from old tech and hope nobody called them on it. The two features they headlined were ASIC-based lighting and reflections, and checkerboard-style supersampling. Both are great features when done well, but when someone tells you they are a god and then you watch them bleed (reducing ray counts in BFV to fix performance issues), people start to question whether they made the right choice, and those on the fence fall off it.
Are ray tracing and DLSS a discernible benefit? Are they a quantifiable benefit right now? We have two games that show them off, with a promise of more, but only if developers go ahead with it and lock themselves into the Nvidia ecosystem. Of those two games, BFV has been struggling and implements only one of the features (downgraded), and Final Fantasy's implementation of DLSS is only for the 4K elites (sucks to be you, 2070 and 2080 users). AMD has far more benefits in DX12 and Vulkan implementations over Nvidia, with the Vega series of GPUs happily smashing through DX12 titles. Is that not a benefit?
I don't disagree that the price tag is quite high; I have made my opinion on that known. I think they should have targeted the 550-600 price range, but unfortunately it is a great set of technologies that is expensive to produce. The Vega series is not a bad series of GPUs if we omit the toastiness for a moment; they perform as expected, consistently and happily, in their applications.
If justifying a price tag requires brand new headline features, then no one should have been buying a new GPU for the past five years, as there has been no real headline feature development. Ray tracing is a tiny element of a game, and I find it quite odd because games like DayZ on low settings have amazing lighting effects, same with Arma 3, but they don't need specialised hardware to do it. So ray tracing cores (as in, a rushed implementation of tensor cores specialised for one thing) mean nothing to me. Frankly, the whole tensor implementation should have just been one massive tensor system which can be dynamically used for anything (including ray tracing).
Until benchmarks are released we don't know the memory utilisation of the Radeon VII, but what we do know is that if a game has 4K textures, it can stress even the 11GB of a 1080 Ti/2080 Ti, so 16GB is an operable figure.
You say bias; others say Nvidia are fleecing their customers into the ground; others don't care about anything except what gives the best performance regardless of price. Me, I'm a generalised-compute kind of guy: if I have to spend big bucks on an ASIC to get a small increase then I won't do it. So AMD equipment suits me better.
I think sole-purpose PC gaming is shrinking thanks to modern phones, tablets and consoles. Back in the day, to experience gaming on a platform other than a console/arcade (PS1 etc.) you had to get a PC, but nowadays you can get good 3D graphics on a phone/tablet. Parents who buy PCs for gaming are extremely few; they would rather save for a console because consoles are cheaper and easier to manage - no updates, easy to use, etc. AMD has seen the future, and that's why they don't rush into PC gaming but focus on the mid to low end range, because that's where the money is. Personally I won't buy any card for gaming purposes above $300, because I still think gaming graphics are too 'artificial' and not lifelike. The hype train on technology is good, because why not? The cost of high end tech does trickle down over the years and does help with innovation.
As far as I'm aware, NVIDIA aim to bring this to 1080p, 1440p and 2160p, but we've yet to see anything really. Battlefield V, from what I've heard, is aiming to bring it into the game within this month, so fingers crossed for that! However, I'm now left worrying that I might've messed up.
I've just gone out and bought a 3440 x 1440 monitor and moved my 1440p to a portrait position. Am I now going to be without DLSS because I've bought an "unpopular" resolution or will I be covered by the 1440p aspect of the resolution? It's a niggling thought and it worries me.
Was surprised to see this getting a consumer release.
Bit of a mixed bag. Seems too expensive for the likely performance. Still good for those trying to avoid Nvidia where possible - people who don't like the way they operate or who still remember being stung by their inaction over the millions of faulty solder parts they sold a few years ago.
Does seem to show that the original Vega had a poor choice of ROPs and was possibly bandwidth starved (although they might have other reasons for going from a 2048-bit to a 4096-bit HBM2 interface).
But the other suspicion is that resource-starved AMD, keen to break into the high-end gCompute market, got Raja and the Radeon Group to produce a design heavily focused on gCompute, but had neither the research budget nor the volume to adapt the design for gaming with gaming-centric variants, in the way Nvidia do with GP100 vs GP102, etc.
AMD either need a much higher budget to be able to do two lines, one aimed at gaming and the other at gCompute, or need to find a way to apply the Zen/Ryzen chiplet idea to GPUs.
Navi onwards should have some of these ideas.
EDIT:
https://www.anandtech.com/show/13852...e-as-ryzen2000
Seems that there will not be any monster APUs soon. Maybe later, if they do a smallish chiplet for Navi.
Yes, for all the "meh" responses, a ~25% performance boost with fewer shaders is nothing to be sniffed at. Wonder if Navi will see the same rebalancing...
"We were told that there will be Zen 2 processors with integrated graphics, presumably coming out much later after the desktop processors, but built in a different design."
Money on a larger IO chip with IGP fabbed on 14nm and glued to a single chiplet? Could be a relatively easy way to keep GF happy with the WSA and we know that Vega fabs with a *very* nice power curve on 14nm...
That is possible, but graphics is going to be way more sensitive than a CPU to being away from the memory controllers, so a graphics chiplet seems unlikely. One of my key thoughts on the other thread, where I wondered if the I/O chip would have an IGP in it, was that a single solution for all cases is often a preferable way forward. Along with a personal hope: I have built too many compute servers that used a cheap £30 graphics card, and having just basic integrated graphics would have been really nice. I also couldn't see how that second location for a CPU chiplet could also take a graphics chiplet unless you create a new package for that purpose.
TBH, having read that link, my money now would be on a standard APU just like the 2400G but on 7nm to allow more compute. We know from Radeon VII that Vega can clock way higher on 7nm for the same power, so the GPU bit wants to be on 7nm, and because GPUs don't cache well it needs to be with the memory controllers. The CPUs need to be on 7nm as well, so now I think it's just going to be a single 7nm chip.
Still find Radeon VII an odd name, but I guess "Vega 60" would sound like a step backwards. Tempted to call it that anyway.
It may be the way I've read that, but it seems like you're saying GPUs are sensitive to latency, what with the distance and cache comments.
With GPUs, bandwidth is more important than latency, so being away from the memory controller and a fast cache isn't as important as it would be for a CPU. You may want your memory close to the GPU because of signal strength (more traces means thinner traces, which means lower drive power, which means shorter distances). In an ideal world AMD would use a GPU block, run traces through the substrate to a small block of HBM, and then connect that to an I/O die.
"It is likely that AMD will use more power for the same performance - 300W vs. 250W - and lack any forward-looking ray tracing support, but the Radeon VII deal is sweetened a little with the knowledge it will ship with codes for upcoming Devil May Cry 5, The Division 2 and Resident Evil 2 titles."
I prefer more future-proof tech over free games, but maybe that's just me. Fewer watts too: a 50W difference (the likely gap) for 8hrs a day x 365 days of gaming is about $19 a year at ~$0.12/kWh (many pay far more for electricity - $0.20 in some US states, even $0.24), so do the maths on TCO for a card you'll use (as many people will) for ~5-7 years. Only the rich buy cards yearly; the rest of us probably live a little longer with $700+ cards. Five years is a free $100 anyway (up to double that at $0.24 etc.), so buy the three games yourself and enjoy a better gaming experience on the NV 2080 for ages after you buy it. Then again, most don't even ponder TCO, so maybe some will buy this anyway.
I can't wait to save $20-40 a year just from buying a new 27-30in monitor vs. my old Dell 2407WFP-HC. It's like G-Sync for free, as my Dell 24 is 11 years old! Hard to believe a new 27in would essentially buy me a 2060 for free if the new monitor lasts even 10 years (summer watts are higher here too, ~$0.15 or so). It is easy to hit 40hrs a week gaming if you have a kid and you play too. I can put in 20 over a weekend if a great game hits, and another 20 during the week sometimes if time allows. Kids can dwarf this... LOL. The monitor watts are the same gaming or browsing; the video card watts only matter when gaming or working (more time idle for most, I'd guess).
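The running-cost arithmetic in the post above, spelled out (assuming the 50W gap and the quoted electricity prices):

Code:
# Yearly electricity cost of an extra 50W of draw while gaming
def yearly_cost_usd(extra_watts, hours_per_day, price_per_kwh):
    kwh_per_year = extra_watts * hours_per_day * 365 / 1000
    return kwh_per_year * price_per_kwh

print(yearly_cost_usd(50, 8, 0.12))  # ~$17.5/yr at $0.12/kWh
print(yearly_cost_usd(50, 8, 0.24))  # ~$35/yr at $0.24/kWh, so roughly $90-175 over five years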
I'm thinking this is a tough sell at anything above $550, so they should have gone with 8-12GB of GDDR5X to cheapen things up. It's just not a 2080, period. Why they AGAIN chose HBM, hobbling their top card, is beyond me. It does NOTHING for them, and 16GB isn't needed by anything, nor is HBM outside of servers. The 2080 Ti proves you don't need this bandwidth either. Again, they just screwed themselves out of much profit, as most will just go NV for features + watts. If they had gone with cheaper memory, I wouldn't be able to post this critique and they'd still make money. Most devs are not even aiming at 11GB yet.
12GB of GDDR5X is all that was needed to enable $550-600, which would sell MUCH better against a card that will look far better over time with RT + DLSS and even VRS, all of which make things a much better experience for gamers (not to mention watt-cost TCO). 16GB of HBM and no new features won't win at roughly the same price. I don't get why you would buy this as a gamer vs. a 2080. Price drops will be tough for AMD here too, as they definitely don't have much room to drop with expensive 16GB on the card. Are they trying to fail on purpose? I couldn't imagine trying to sell this vs. the 2080. I think Jen's response was correct, unfortunately (I own AMD stock... LOL), and I expect nothing to be released in response other than a new driver or something from NV, if anything at all. Nothing scary here for NV net income. Even while NV lacked adaptive sync monitor support it was still a tough sell, and that advantage is over now with the new NV drivers coming. Want your adaptive sync? You can have it now. Again, what is the selling point today, or tomorrow? This is a 1440p card at best (who likes turning stuff down? not what devs wanted), so again it's pointless to castrate your income with HBM.
I can hear someone saying, but, but, but, it's a 4k card...LOL:
NOTE: AMD's own presentation showed Forza Horizon 4 at 1080p and 100fps with this card, so even they are tacitly admitting in their own demo that it isn't really a 1440p card. I agree, and I want all the detail settings on, ALWAYS. It will take NV's BIG 7nm core to hit 4K for real IMHO (700mm^2+ at 7nm, that is, or two cards?), and 4K is used by <1.5% of 125 million Steam users. Who cares? If you're not hitting 60+ fps, you're going to dip under 30 in the minimums too often for my taste (forget multiplayer at 30fps). I like AMD's idea: 100fps, or upgrade your card. I won't be turning stuff up or down in every game just to get passable gameplay; I hate that. Just give me a card that does everything with EVERYTHING on, in, well, EVERYTHING... LOL. I grit my teeth the second I have to downgrade settings to get my fun back. That isn't what the dev wanted me to see at that point, right? When 7nm NV hits, all cards will come with RTX features top to bottom, and that will mean 65 million cards sold next year with RTX stuff from NV. Devs are shooting for that now, not later, as they know what is coming. I hope AMD hurries with answers.
That said, I'm happier about the CPU side of these announcements, as the watts are VERY good IMHO if true. That will sell easily vs. Intel if perf is the same in games (the part that held me up in 2018) and better in many apps (already sold on that). I can't wait for 12 cores, which is what I need for the main PC; 8 cores is good for my HTPCs. I also think they have more room to go above Intel perf here. That's a LOT of watts AMD has left to play with, right? I smell a sandbag here, and think AMD will launch with better than equal perf, but I'm just guessing (that's a lot of watts off!). Not surprised though; we're talking a pretty good 7nm process (millions of Apple chips shipped already) vs. Intel 14nm++. That is a lot of ground for Intel to make up, and these numbers show it. I mean, 45W is half the watts of a 95W Intel proc. Again, I say that is a freaking lot of headroom AMD has to run up the score. Even if it's half that, it's a lot for us or AMD to just clock up out of the box. I suggest THEY do it and CHARGE for it. Put cores on TOP of Intel prices! I think the wccftech guy was right here: it's probably 4.6GHz all-core, and could probably hit 5GHz all-core for even more perf. Intel has a problem IMHO for 12-18 months at least, until their 7nm arrives against TSMC 5nm (TSMC could screw that up; Intel are going much less aggressive with their 7nm).