It's articles like PCGamer's one https://www.pcgamer.com/you-can-now-...g-performance/ that are confusing me about all this ray tracing malarkey. It mentions that it works with any graphics card or GPU that supports Microsoft's DirectX Raytracing (DXR) API, so basically anything that supports DX12.
my 750ti supports DX12, so can I do ray tracing?
we just need actual games with DXR/RTX enabled out there so we can all see what the fuss is about.
as for that, they could, but nowhere near real time.. you saw what the Battlefield patch did with that RTX support once they figured out the kinks. who knows what'll happen?
we just need someone to explain it all in non-techy 'explaining it to your mum' words.
This is rather hilarious.
Nvidia releases a new graphics card, with "next gen" technology - raytracing/DXR - at this price point. Hexus goes crazy, up in arms about how terrible it is and how Nvidia are price gouging.
AMD release a current generation card, without anything new other than a sorely needed performance boost and with similar performance to the new Nvidia card, at the same price. General consensus is that it's OK and we'll "wait and see".
Bias anyone? I've nothing against AMD personally - they make great budget cards at the moment (previously made the best high end ones too) and the price/performance of their £250 cards is brilliant... for 1080p gaming I wouldn't recommend anything else - but this new Radeon 7 just doesn't fit in the current market.
This is a perfect example of price gouging - why on earth would anyone buy one of these new cards at that price point, when you can buy an Nvidia card with the same or even slightly better performance, for the same money? Oh, and you get raytracing chucked in for good measure, making even more of a difference.
It's also completely untrue to state that Raytracing "cuts your FPS in half" - it absolutely doesn't. The current performance hit at 1440p *without* DLSS is about 25%. With DLSS that is expected to drop, although we can't prove that until later this month when BFV gets its update.
I realise AMD loyalists will probably rage back against me on this, but whilst I do think the RTX2080 is overpriced by £100 or so, the Vega 7 seems to be £250 or so overpriced.
edit: I don't quite get why having 16gb of memory helps gamers at the moment either - at 1440p, the most I have seen my VRAM creep up to is 4-5gb, which is fine on an 8GB card like the 2080. At 4K, yes you do start to hit a wall, but even then it's very rare to go past the limits. I'm not aware of any game where 16gb would be a benefit at the moment, but I'm happy to be corrected. It does make a big difference to professional work though, where it definitely is a useful benefit. It seems an odd choice, as surely by chopping the memory down to say 11gb, they would save enough to be able to retail this for £450-£500, at which point it makes sense and becomes a good option for many.
If I remember correctly, NVIDIA came out and said that the 1080Ti could manage around 1.1 Giga-Rays per second, whereas the 2080, with very similar performance in most games, can manage 10 Giga-Rays per second - about 10x better. Crucially, those 10 Giga-Rays aren't processed within the GPU's graphics cores, leaving them free to make up the rest of the image; they're processed in the RT cores, which are specialised for tracing light backwards through the scene from the camera.
So yeah, your 750Ti could do it, but you'd likely see framerates in the 1/100th of an FPS or something similar, maybe worse. This is why it's been seen as the holy grail of graphics computing, because NVIDIA have found a way of doing it, to good effect (in my opinion) in "real time."
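For anyone wanting to check whether their card and driver actually expose DXR (rather than just "supporting DX12"), a rough D3D12 snippet along these lines should tell you. This is just a sketch written from memory against the Windows 10 1809+ SDK, so treat it as a starting point rather than gospel:

```cpp
// Rough check: does the installed driver actually expose DXR?
// Needs the Windows 10 1809 (or later) SDK for D3D12_FEATURE_D3D12_OPTIONS5.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main()
{
    // Feature level 11_0 is deliberately low so older cards (750 Ti included)
    // can still create a device and run the query.
    Microsoft::WRL::ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
    {
        std::printf("No DX12-capable device/driver found.\n");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts = {};
    const bool dxr =
        SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &opts, sizeof(opts)))
        && opts.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;

    std::printf(dxr ? "DXR is exposed by the driver.\n"
                    : "DX12 yes, DXR no (at least not with this driver).\n");
    return 0;
}
```

On a 750 Ti today that should land in the "DXR no" branch; whether Nvidia ever ship a driver that changes that for cards that old is anyone's guess.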
Films like Iron Man and other high-end blockbusters have used ray tracing for years, but they've spent weeks and months rendering the reflections, whereas with these new technologies, as they progress within a generation or two, we might see something which once took 10 days now take 20 minutes.
In terms of cost to produce, it seems the Radeon VII will cost more to manufacture than the RTX 2080, to the point that I think AMD need a high price to make any kind of profit.
In many ways I think this card only exists because of the Nvidia price gouging, as that meant they could release it at a high enough price to make sense. If the RTX 2080 wasn't price gouging there would be no place for this card to make any kind of sense. In short, AMD have been gifted a window to compete while they get Navi ready.
Not a big problem to me as I am not buying either card at that kind of money.
Possibly.
Vega 56 power draw is around 223W, but you can choose anything between 160W and 310W, as these things undervolt nicely but can also be overclocked to mental amounts of power (with not much performance gain).
https://www.tomshardware.co.uk/radeo...-33997-21.html
Vega is more power efficient than the RX 580, and given that Ebuyer will sell you a Vega 56 for £320, the RX 590 at £250 seems like a poor deal. Nice to see the 570 at £150 though.
So this Radeon VII card is twice the price of a Vega 56. I just don't get high end graphics cards, I'm clearly not the intended market.
The issue with Nvidia's release was how obviously it was rushed to market. No games to support it at all, no support in Windows whatsoever, not even an Nvidia driver to activate the RTX components. That's like running a bluff in a Texas hold'em game when you have two pair and hoping the 4th and 5th cards turn it into something better. Luckily they did get better results as time wore on, but with sacrifices.
AMD are using a level playing field GPU for this release; Nvidia are using targeted hardware for niche gaming areas, based on an existing technology, and calling it the mutt's nuts.
This is not an example of price gouging by AMD, the Vega lines are genuinely expensive to manufacture, I'm surprised they've doubled the HBM and bandwidth and weren't more expensive.
The 16GB is for the 4K framebuffer at extreme detail, which requires high bandwidth (GDDR5X and GDDR6 were needed to start getting proper 4K performance), so HBM is perfectly suited for 4K. HBM needs to be in 4 or 8GB configurations, and there is no point cutting it short at 12GB. Can someone correct me if I'm wrong - I think it's actually because HBM2 is 8GB per stack, so you would either have 8GB or 16GB, and 8GB is just not enough for high detail 4K.
With the new benchmarks that have been released, the Radeon VII is at the top end of a 2080 and the bottom end of a 2080 Ti for 4K performance. That places AMD's flagship GPU squarely in the number 2 spot for consumer GPU purchases, at the same price as a 2080. That's not bad tbh.
That's the difference here between Nvidia's release and AMD's release.
Like I said, people rushing to defend AMD... it's interesting. The RTX line is also genuinely expensive to manufacture, and the R&D costs were likely huge.
If this had been the other way around, people would be slamming Nvidia for releasing a card of the same power (to all intents and purposes), with the same performance, but missing all the new features. Because it's the traditional underdog doing so (AMD rather than Nvidia), people rush to defend them.
It's odd.
Sadly I think a lot of people are going to miss the point with this card. There is not really a value proposition unless overclocking is stable at 1800MHz+ on water, as it misses the "ray tracing support". But we WILL see ray tracing as a software product within AMD's Navi, thanks to OpenCL and Vulkan. I've said this since Doom 2016: we need devs on Vulkan, but that is pretty hard. Still, if the new Vega can push out the performance and is a beast as a workstation card, we can't complain, as it isn't really made for the average consumer. I wouldn't say any card over £400 is consumer friendly. Value for me is longevity over performance, so when we see a GTX 1160 or a Navi card priced at £300 without these features that are there for marketing, I'll invest, as it will have to last me 4 years and provide frame rates at the resolutions the majority plays at.
and that's a problem?
I was happy running Crysis at 14fps in low detail back when it first came out. A 1fps slideshow is an improvement.
And you know I will be testing it if Nvidia enable it in the drivers, and I'll try it on that 7770 too, just because.
I only mentioned DXR originally because as yet we've only seen one of the real-time ray tracing implementations out in public, in BFV. We've not seen DXR more widely yet, so how do we know which one is the better long-term tech?
People are going on and on about how it's must-have tech and not worth buying anything without it. We've only seen one game. It may end up like PhysX, or even G-Sync - now that G-Sync works on FreeSync monitors, why pay the extra few hundred £ for a G-Sync monitor when the same thing works on the cheaper FreeSync version?
just treat the RTX cores as a nice bonus feature for now, and buy whichever card floats your boat based on all the other features, like price, 4K fps and such.
I think your opinion of this situation is tainted by your dislike of people's preference for AMD or Nvidia.
I would like you to go through what I said and provide a reasonable response as to why anything I said was wrong, or just mindless defense of AMD.
Tensor cores already existed in Volta but were refined then bolted onto a Volta/Pascal architecture. RT Cores were just specialised/focused Tensor cores.
Not much R&D required tbh except to make the system work together. The R&D costs were already done in Volta.
I don't think you can get 0.5TB/s out of a single stack, so for the quoted 1TB/s mem bandwidth I would guess this is 4 stacks of 4GB. That also ties in nicely with the Vega cards being 2 stacks of 4GB, and that might be the real reason for 16GB. If AMD are already purchasing HBM2 4GB stacks in some quantity, it could be that buying 2GB stacks to make this an 8GB card would be buying a part at low quantity and hence at a price premium. It might just not save significant money to go 8GB.
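If it helps, the back-of-the-envelope maths comes out as below. These are my own assumed figures (the usual 1024-bit interface per HBM2 stack and roughly 2 Gbps per pin), not anything AMD have confirmed for the Radeon VII:

```cpp
// Back-of-the-envelope HBM2 bandwidth for different stack counts.
// Assumes 1024 bits per stack and ~2 Gbps per pin - typical HBM2 figures,
// not AMD-confirmed numbers.
#include <cstdio>

int main()
{
    const double bits_per_stack = 1024.0; // interface width of one HBM2 stack
    const double gbps_per_pin   = 2.0;    // effective per-pin data rate
    const double per_stack_gbs  = bits_per_stack * gbps_per_pin / 8.0; // GB/s

    const int stack_counts[] = {1, 2, 4};
    for (int stacks : stack_counts)
    {
        std::printf("%d stack(s): ~%.0f GB/s\n", stacks, stacks * per_stack_gbs);
    }
    return 0;
}
// Prints roughly: 1 stack ~256 GB/s, 2 stacks ~512 GB/s, 4 stacks ~1024 GB/s.
```

So ~256 GB/s per stack lines up with 4 stacks of 4GB being the way to hit the quoted 1TB/s, and the existing Vega cards sit around the 2-stack figure (a bit under, as their HBM2 is clocked lower).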
R&D costs, being expensive to manufacture, or even being better mean nothing though if no one adopts your new thing. Hydrogen fuel cells in cars, Betamax and the Virtual Boy were all either better, ahead of their time, or something we think should've been more successful, but all that means nothing, as sometimes new and better just doesn't take off.
Nvidia are trying to reinvent the wheel, and that may be the future, who knows. On the other hand, AMD have taken what we already know and made it a bit better; it's a safe bet.
OK, as you asked...
I think that is an opinion - it's hard to get developers engaged in building support for a new feature (in gaming) without the hardware out in the wild. Yes it may have been released 6 months before partners were truly ready (and maybe even Nvidia, given the early hardware issues with the 2080ti line) but I don't think that's strictly relevant to how the cards actually perform.
True enough, but to gamers does it really matter? When people are looking for raw performance and graphical fidelity, which I would suggest most are, then where the technology has come from isn't of interest.
OK - but to consumers it feels that way - they are being asked to pay the same price for Vega 7 as they are for a card that offers equivalent performance and more features, with no discernible benefit for taking the AMD option. 16GB of memory isn't of practical use to the vast majority of gamers, whereas Raytracing and DLSS both are.
That all depends on whether you value Raytracing/DLSS or not. Personally I would not have bought an RTX2080 if raytracing had not been a thing - the cards are stupidly expensive without the added benefit that brings (maybe even with it), and I would still be running an older generation card for a long time to come if they had not included it. That's what I don't get about this - AMD are not bringing anything to the table to justify the price tag they have on these cards. Putting aside any preference between the two companies and looking at this logically in today's market, I cannot see why you would take the AMD option at the moment unless you specifically needed that sort of memory... which the vast majority of people, even those gaming at 4K, don't need.
I look at things similarly in the CPU market. If I needed a powerful CPU with lots of cores/threads, then the only sensible option is a Ryzen CPU... Intel don't compete there at the moment. If you want single-thread performance, however, then Intel tends to win out in most situations, albeit often at a higher price.
If you want to buy one, or think they are the better option, then that's fine... each to their own. I can only offer my opinion, and am also pointing out again the obvious bias towards AMD that exists on this forum and has done for many years.
I say there is a discernible benefit for taking the AMD option: supposedly you get better performance in Vulkan titles. Obviously we can also say a similar thing for the Nvidia option, as you're getting RTX. The thing is, though, with the AMD option you're getting more (FPS) of a known thing (resolution, graphics settings, AA), whereas with the Nvidia option you're giving up some of that known thing (FPS) for something unknown (RTX).
The biggest problem Nvidia have is convincing people that RTX is worth giving up on something they've held dear to their hearts and have been acutely aware of since the dawn of 3D gaming, in exchange for something they're not even sure they want.