Well I'm really not just repeating what anyone else has said, I'm saying things as I see them, having rejoined the tail end of the discussion. I see a lot of strange conspiracies and reasoning which makes little objective sense to me, so maybe that's where we agree?
But you are repeating what's already been said. Let's see if I can sum up what the last 11 pages seem to just keep ploughing up in one form or another.
A SINGLE game/benchmark that's still in alpha, that spent the majority of its life being developed on an API used by a single semiconductor company, that the developer said is lacking some optimisations in DX12 that would be present in the DX11 drivers, and that depends massively on the developer's efforts in optimising DX12 code for particular hardware, shows a big increase for the aforementioned semiconductor company when comparing DX11 to DX12, and a decrease in DX12 performance compared to DX11 for a semiconductor company that the developer has only spent 6-12 months optimising for.
Now it can be argued it's because of X, or Y, but in the end, with only a single game, isn't it a little early to draw any conclusions? Far from strange conspiracies and reasoning which makes little objective sense, if you'd spent the time it's taken to argue a point reading what's already been said instead, you would've seen that, far from strange conspiracies or biased conclusions being made, most people have been saying it's too early to call it either way, and that by the time it's known it will probably make little difference.
The strange conspiracies I'm referring to are the suggestions that it's somehow sabotaged to run worse on Nvidia hardware, that well-known review sites would all go out of their way to make it run worse by enabling some option or other. It seems to have started with the blame being pointed at an MSAA bug - so despite the fact the developers have stated that that is misinformation, sites have still run it with MSAA disabled to prove otherwise and avoid being accused of gaming the results, which it seems has happened anyway.
Strange reasoning would include grouping all AA methods together; if it's not MSAA then they must have enabled some other AA to skew the results, which TBH completely ignores the fact that different AA methods are implemented completely differently, so that makes no sense either. And TBH something is drastically wrong if any functioning AA implementation is destroying performance as much as those numbers show, especially popular post-fx methods like FXAA. If it were some option killing performance that badly, wouldn't some site have discovered it by now, disabled it, and posted the results? It wouldn't be terribly hard to do and they'd probably attract a lot of traffic.
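Just to illustrate why lumping them together makes no sense, here's a rough C++/Direct3D 11 sketch of my own (purely illustrative, nothing to do with the game's or any review site's actual code): MSAA is a property of the render target itself, which the hardware has to store and resolve extra samples for, whereas a post-fx method like FXAA is nothing more than one extra full-screen pixel-shader pass over the finished image.

#include <d3d11.h>

// MSAA: baked into the render target - the hardware stores and resolves
// several samples per pixel, which is where its cost comes from.
D3D11_TEXTURE2D_DESC MsaaTargetDesc(UINT width, UINT height)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 4;                 // 4x MSAA
    desc.BindFlags = D3D11_BIND_RENDER_TARGET;
    return desc;
}

// FXAA (or any post-fx AA): no special pipeline state at all - just one more
// full-screen pass over the already-rendered image. Assumes a full-screen
// triangle vertex shader is already bound; the shader/resource names here are
// hypothetical.
void ApplyPostAA(ID3D11DeviceContext* ctx,
                 ID3D11PixelShader* fxaaPixelShader,
                 ID3D11ShaderResourceView* sceneColour,
                 ID3D11RenderTargetView* backBuffer)
{
    ctx->OMSetRenderTargets(1, &backBuffer, nullptr);
    ctx->PSSetShader(fxaaPixelShader, nullptr, 0);
    ctx->PSSetShaderResources(0, 1, &sceneColour);
    ctx->Draw(3, 0);                           // full-screen triangle
}

The two paths have completely different costs, so a bug in one says nothing about the other.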
Just to be clear, I have made no claims that this is some ultimate benchmark of DX11 vs DX12 or AMD vs Nvidia performance - just that many of the theories I've seen in this thread and elsewhere trying to downplay it are deeply flawed.
Perhaps I'm missing something then, because I can't recall anyone saying it was deliberately made to run worse on Nvidia hardware, other than that the developer has probably spent less time optimising the game/benchmark for Nvidia hardware.
I also don't recall anyone saying review sites have enabled some option to make it run worse, although without knowing what the supposed MSAA bug involves, how would anyone know whether they did or didn't? That's why, to rule out any possibility of bias (imho), ALL forms of AA should have been disabled.
The developers first said there was a bug in MSAA and then changed their mind and said there wasn't; under those circumstances I would say the developer is the one spreading misinformation. But without knowing either way, the only way to be sure (imho) would be to run the test with ALL forms of AA disabled. Yes, different AA methods are implemented completely differently, but without knowing the details of what the supposed MSAA bug is, how would you know it's not affecting other forms of AA? The simple matter is you don't.
The only reason people are trying to downplay it is that fanbois like Jimbo75 HAVE taken this single alpha release of a game/benchmark as the ultimate benchmark of DX11 vs DX12 or AMD vs Nvidia performance, by saying things like, and I quote: "because AMD hardware (especially GCN) has the highest market share", "we only hear about Nvidia's involvement when they're complaining about something", "It's testament to the sad state of the current tech press when WCCFtech are making articles 100x better than Hexus.", that people "don't want to see evidence", that "GameWorks is designed to gimp GCN *and* Kepler" and that "every game tainted by it so far has been wrecked by it".
We have doozies like the fanbois saying Nvidia has "(under 1/3rd) in market share", that "faux tech "enthusiasts" should be embarrassed", and that the 80% of people who this very article says own discrete Nvidia GPUs are "sheeple being marketed into believing they made the informed choice".
And then we have outright falsehoods like "DX12 is Mantle rebadged" and that "If not for AMD [we] wouldn't have DX12".
Then again, if you took the time to read this thread you would have seen the utter drivel being spouted for yourself, and how this alpha release of a game/benchmark has been touted as the ultimate benchmark of DX11 vs DX12 or AMD vs Nvidia performance. That's why, before starting this conversation with you, I said the following...
You don't seem to be past the point of caring, just saying
True, I guess I just expected someone who says they came into the debate fresh, without having read most of the thread, not to go around accusing people of clutching at straws, or of coming up with strange conspiracies and reasoning which makes little objective sense.
Corky - a reviewer will mention if they are using AA - so when they say "we turned MSAA OFF", with no "and replaced it with", it means they are not using ANYTHING ELSE.
I have to say that I agree with watercooled and HalloweenJack on that point. Disabling a feature doesn't imply switching it for another.
Before making my last post, I actually decided to take a brief look at the Dell, HP and Lenovo sites. I am actually surprised by the range of products they have, and frankly could not be bothered clicking through all of them. Besides, I don't know which products sell most. I am sure it isn't going to be the high-end/gaming products, but I am less sure whether people tend to go for the cheaper ones or the mid-range ones.
At a glance, there seem to be a lot more nVidia graphics cards, though it is not quite as bad as trying to find an AMD CPU in the past. For instance, in the Alienware range they have a bunch of nVidia options to choose from, and one AMD card. Still, I am surprised to see that occasionally the AMD card is used for the higher-end system of the two. For instance, in the US (but not the UK), the Dell XPS 8700 desktop comes in two flavours. The standard version comes with an nVidia card, but the more expensive "Special Edition" uses the AMD card. And there is another similar example in the HP Envy line (though in that case they didn't call it "Special Edition".. it is just the more expensive one).
I am guessing that the general public, who make up the bulk of the buyers, don't actually have an opinion on AMD vs nVidia though. Those who might be a little curious might ask their nerdy friends (that would be us). Most will probably buy the one that fits closest to their budget. My folks used to ask my opinion when they wanted a new laptop, but in recent years they just go with a brand (system, not component) they trust and pay what they feel is reasonable. More often than not it'll be onboard graphics, but every now and then it'll have a low-end nVidia (nVidia pretty much own laptops).
Let's just put all of Corky's nonsense to bed with one post.
http://www.overclock.net/t/1569897/v...#post_24356995
Kollock, Oxide: I could see how one might see that we are working closer with one hardware vendor than the other, but the numbers don't really bear that out. Since we've started, I think we've had about 3 site visits from NVidia, 3 from AMD, and 2 from Intel (and 0 from Microsoft, but they never come visit anyone ;( ). Nvidia was actually a far more active collaborator over the summer than AMD was. If you judged from email traffic and code check-ins, you'd draw the conclusion we were working closer with Nvidia rather than AMD.

Kollock, Oxide: Personally, I think one could just as easily make the claim that we were biased toward Nvidia, as the only 'vendor' specific code is for Nvidia, where we had to shut down async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance, so we shut it down on their hardware. As far as I know, Maxwell doesn't really have Async Compute so I don't know why their driver was trying to expose that. The only other thing that is different between them is that Nvidia does fall into Tier 2 class binding hardware instead of Tier 3 like AMD, which requires a little bit more CPU overhead in D3D12, but I don't think it ended up being very significant.

Kollock, Oxide: The other surprise is that of the min frame times having the 290X beat out the 980 Ti (as reported on Ars Technica). Unlike DX11, minimum frame times are mostly an application-controlled feature so I was expecting it to be close to identical. This would appear to be GPU-side variance, rather than software variance.

Kollock, Oxide: I suspect that one thing that is helping AMD on GPU performance is D3D12 exposes Async Compute, which D3D11 did not. Ashes uses a modest amount of it, which gave us a noticeable perf improvement. It was mostly opportunistic where we just took a few compute tasks we were already doing and made them asynchronous; Ashes really isn't a poster-child for advanced GCN features.

Kollock, Oxide: Our use of Async Compute, however, pales in comparison to some of the things which the console guys are starting to do. Most of those haven't made their way to the PC yet, but I've heard of developers getting 30% GPU performance by using Async Compute. Too early to tell, of course, but it could end up being pretty disruptive in a year or so as these GCN-built and optimized engines start coming to the PC.

Kollock, Oxide: In the end, I think everyone has to give AMD a lot of credit for not objecting to our collaborative effort with Nvidia even though the game had a marketing deal with them. They never once complained about it, and it certainly would have been within their right to do so.

You've just been completely and utterly destroyed Corky. Everything - literally everything you attempted to spin about this has been proven a lie. Everything I told you has been true. Nvidia, when they lose, first of all try to cheat; when that fails they resort to lies instead.

Kollock, Oxide: --
P.S. There is no war of words between us and Nvidia. Nvidia made some incorrect statements, and at this point they will not dispute our position if you ask their PR. That is, they are not disputing anything in our blog. I believe the initial confusion was because Nvidia PR was putting pressure on us to disable certain settings in the benchmark, when we refused, I think they took it a little too personally.
You bought a graphics card that is poorly equipped for DX12, from a company that treats you and everybody else like crap. Just accept your error and move on.
Still peddling the BS that a single game/benchmark is the ultimate benchmark of DX11 vs DX12 or AMD vs Nvidia performance then, Jimbo75.
You or anyone else can pick apart anything or everything I've said, but ultimately everything I've said was with the intention of showing you how flawed your thinking was, how you had lost all objectivity because your hate for Nvidia and love of AMD had clouded your judgment. I stand by that no matter what you, or a representative from a company that was chosen to be the poster child for the now-defunct Mantle, say.
https://www.reddit.com/r/AdvancedMic...g_dx12/cuklm4j
nVidia: bruh, just disable stuff on your demo so we come on top and we will make it worthwhile?
Oxide: wat? No ... hell no wtf is wrong with you
AMD: Get rekt n00b
nVidia PR: "This demo has bugs, it's not representing correct figures".
If you read that without your rose-tinted glasses you would see that's not what he says at all. What he says is that console guys are getting 30% GPU performance by using Async Compute, something that, contrary to what the Oxide representative seems to think, is supported by Maxwell, and this PDF (page 31) explains why they got such a performance hit when using it.
Basically, Kepler and Maxwell have one pipeline which can handle either a lot of compute queues or one graphics queue, but it can't do both at the same time without a performance penalty. Such a design works fine for DX11, because prior to DX12 there was no way for rendering to occur simultaneously with compute, so there was no need for the parallel pipelines/engines AMD built into GCN. To quote: "All our GPUs for the last several years do context switches at draw call boundaries. So when the GPU wants to switch contexts, it has to wait for the current draw call to finish first."
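For anyone wondering what "DX12 exposes Async Compute" actually means from the application side, here's a minimal C++/D3D12 sketch of my own (assuming an already-created ID3D12Device; it's an illustration, not Oxide's code): the app simply creates a dedicated compute queue alongside the usual direct/graphics queue, and whether work on the two genuinely overlaps is entirely down to the hardware and driver - GCN's separate compute engines can run them concurrently, while hardware that context-switches at draw call boundaries effectively serialises them.

#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Create the two queues an engine needs to submit graphics and compute work
// independently. D3D11 only ever gave you the equivalent of the direct queue.
// (Error handling omitted for brevity.)
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // "Direct" queue: accepts draw, compute and copy commands.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    // Compute-only queue: the API lets the app submit work here in parallel
    // with the direct queue, but the API can't force concurrency the silicon
    // doesn't have - that part is up to the GPU and its driver.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
}

In other words the extra queue is just an API-level handle; exposing it costs nothing, but benefiting from it depends entirely on what the hardware does with two streams of work.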
TBH I would've expected a developer that's working with an API that gives them greater control over how the hardware deals with their code to have known this; it just goes to show how little (imho) Oxide and yourself understand about the subject you're discussing. Rather than being objective, you and Oxide prefer to be sensationalist.
Corky - are you in the industry?