Benchmarking clash between NVIDIA and AMD
Taken from HardOCP:
Quote:
You guys know we got rid of benchmarks years ago because we found those unreliable as indicators of gaming performance and GPU cross-comparisons. While we have moved on, the GPU companies have not. Yesterday we got an email from NVIDIA about a new H.A.W.X. 2 benchmark and the company encouraged us to use it on upcoming reviews. Today we got an email from AMD asking us not to use it on upcoming products.
It has come to our attention that you may have received an early build of a benchmark based on the upcoming Ubisoft title H.A.W.X. 2. I'm sure you are fully aware that the timing of this benchmark is not coincidental and is an attempt by our competitor to negatively influence your reviews of the AMD Radeon™ HD 6800 series products. We suggest you do not use this benchmark at present as it has known issues with its implementation of DirectX® 11 tessellation and does not serve as a useful indicator of performance for the AMD Radeon™ HD 6800 series. A quick comparison of the performance data in H.A.W.X. 2, with tessellation on, and that of other games/benchmarks will demonstrate how unrepresentative H.A.W.X. 2 performance is of real world performance.
AMD has demonstrated to Ubisoft tessellation performance improvements that benefit all GPUs, but the developer has chosen not to implement them in the preview benchmark. For that reason, we are working on a driver-based solution in time for the final release of the game that improves performance without sacrificing image quality. In the meantime we recommend you hold off using the benchmark as it will not provide a useful measure of performance relative to other DirectX® 11 games using tessellation.
That last paragraph is fairly damning in my opinion and it raises some big questions. Is Ubisoft in NVIDIA's pocket, pushing technology while ignoring the company with the largest DX11 market share? Is NVIDIA pushing a benchmark of a yet-to-be-released game that is somewhat broken on its competitor's cards? We know NVIDIA's current GPU has more tessellation power than AMD's latest, but we have yet to see it make a difference in anything besides a benchmark. I think this shows NVIDIA grasping at straws, and I don't think it has anything up its sleeve that AMD does not already have as well, like refinements in TSMC's 40nm process.
Sounds odd - it's poor of Ubisoft if they have deliberately hobbled their tessellation on AMD cards. Fairly typical of NVIDIA and their ethics, though.
Re: Benchmarking clash between NVIDIA and AMD
Sounds like the tessellation hasn't been correctly optimised, so the nvidia cards with more tessellation grunt can cope with it better than the Ati cards (I refuse to call them AMD cards).
The full extent was probably only realised late; however, that's not going to stop nvidia, who obviously know about it, from pushing this benchmark to make the new Ati card look weak.
It could also be that nvidia has been working closely with Ubisoft to get them to favour code that works better on nvidia cards.
Sort of thing I'd expect from nvidia
Then again, it could be Ati using some odd driver code to do the tessellation, which might make sense to Ati driver programmers but a lot less sense to the programmers at Ubisoft.
And the problem was only spotted by Ati after the benchmark came out; Ubisoft, busy trying to get the game ready for release, don't have time to fix the benchmark atm because they are focused on the game.
Re: Benchmarking clash between NVIDIA and AMD
By the same rule, it's not Nvidia's fault their cards are better at tessellation - all will be told when the seemingly 'finished' tessellation version comes to light.
For comparison's sake, does that mean tessellation in Unigine is broken as well, since it runs slippery smooth on GF1xx compared to the 58xx and perhaps 6xxx?
Re: Benchmarking clash between NVIDIA and AMD
I remember when Ubisoft pulled DX10.1 for Assassin's Creed. Funnily enough, at the time only ATI cards supported it and not a single Nvidia card, and the DX10.1 patch benefited ATI cards! ;)
Sounds like more Nvidia crap TBH. I suspect that AMD is following the DX11 specification more closely and, like the article says, their solution benefits both AMD and Nvidia cards.
I was reading the R900 thread on Beyond3D and it seems that tessellation levels above 10 to 12 are a pointless use of GPU processing power and more epeen ATM - the triangles would be too small to make any real-world difference.
Re: Benchmarking clash between NVIDIA and AMD
Quote:
Originally Posted by
Terbinator
... does mean that tessellation in Unigine is broken as well as it runs slippery smooth on GF1xx compared too the 58xx and perhaps 6xxx.
Actually, if you look back at the Hexus reviews, the GF100 cards were only significantly better at tessellation when you went for the really high levels, while the majority of the benefits of tessellation come from the low and medium levels: and in those tests the 58x0s' performance was on a par with their comparative performance in other benchmarks.
So yeah, nvidia can do epeen levels of tessellation better than Radeon, but for real-world benefit they're as good as each other.
Re: Benchmarking clash between NVIDIA and AMD
My point was more that, regardless of the benchmark, team green have something in reserve with regard to tessellation.
HAWX2 is a crap game anyway - I don't think too many brows will be raised by the numbers it throws up. :D
Re: Benchmarking clash between NVIDIA and AMD
Quote:
Originally Posted by
CAT-THE-FIFTH
I remember when Ubisoft pulled DX10.1 for Assassin's Creed. Funnily enough, at the time only ATI cards supported it and not a single Nvidia card, and the DX10.1 patch benefited ATI cards! ;)
That was my first thought as well. Ubisoft again doing something for nVidia - not a healthy situation for consumers.
Re: Benchmarking clash between NVIDIA and AMD
Quote:
Originally Posted by
Terbinator
My point was more that, regardless of the benchmark, team green have something in reserve with regard to tessellation.
HAWX2 is a crap game anyway - I don't think too many brows will be raised by the numbers it throws up. :D
It's not really about having something in reserve. You visually cannot tell the difference between tessellating down to 2-pixel triangles and down to 16-pixel triangles, yet going down to 2-pixel triangles slows down all cards by a lot, including nVidia's.
Re: Benchmarking clash between NVIDIA and AMD
That's not strictly true, because it's down to how it's implemented: if you've got a very low polygon-count model to start with, then it will show up.
One of the great things about tessellation is the ability to use it adaptively; however, this would require a game to be developed with this in mind from the start, so it'd be DX11-only and would look rather clunky and dated with tessellation off.
So it's not going to happen any time soon.
As to the amount of tessellation and how it impacts performance: again, it's down to the implementation and LOD settings.
A similar situation exists with textures - bad LOD scaling will have a major impact on performance almost regardless of texture size.
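To make the adaptive idea concrete, here's a rough sketch of distance-based tessellation scaling. In a real DX11 game this logic would live in the hull shader; this is plain Python just to illustrate the principle, and the near/far distances and factor limits are made-up example values, not anything from an actual engine:

```python
def tess_factor(distance, near=5.0, far=100.0, max_factor=16.0, min_factor=1.0):
    """Scale the tessellation factor with distance from the camera:
    patches near the viewer get heavily tessellated, distant ones
    stay coarse, so GPU time isn't wasted on sub-pixel triangles."""
    # Clamp the patch distance into the [near, far] range.
    d = max(near, min(far, distance))
    # Normalise: 0.0 at the near plane, 1.0 at the far plane.
    t = (d - near) / (far - near)
    # Interpolate: max_factor up close, min_factor in the distance.
    return max_factor - t * (max_factor - min_factor)

if __name__ == "__main__":
    for d in (2.0, 20.0, 52.5, 100.0):
        print(f"distance {d:6.1f} -> tess factor {tess_factor(d):.2f}")
```

A fixed high factor everywhere (which is roughly what the H.A.W.X. 2 benchmark is accused of) would correspond to always returning `max_factor`, regardless of distance - which is exactly why it tanks performance without a matching visual gain.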