Quote:
Designed for one purpose: to dethrone GeForce GTX 680.
I probably live in the past, but seriously.. who cares about 5 to 10 fps here or there? Whatever happened to ''claiming the throne'' with 30-40% more GPU power than the competition? Forgive my mood, but AMD, this is quite a pathetic attempt to reclaim the mastery..
Wrong. (Or maybe I'm missing something in your article?) Quote:
The trio boost the GPU clock from 925MHz to at least 1,000MHz and, depending upon card, inch up the memory from an effective 5,500MHz to, say, 5,700MHz. The lean overclocking indirectly confirms that the Tahiti GPU doesn't have a whole heap of headroom, right?
The vanilla 7970 is just about the easiest card to overclock in existence. Even AMD confirmed they were a bit timid on the clocks. As long as you bump the power control setting up to +20%, you can just bump the GPU clock up to 1125 without even thinking about it. No voltage modification required. The memory clock is just as easy. I've had my card sitting stable for hours at 1175/1625 with no voltage modification. That's a 27% and 18% overclock on the GPU and memory respectively, with no effort required other than moving three sliders to the right. Just an amazing card. And at the £330 I paid, it's more powerful and a lot cheaper than a 680. Uses a *lot* more juice though!
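The percentages above check out against the 7970's stock clocks. A quick sketch (the stock figures are the reference-card specs mentioned in the thread; GDDR5's effective rate is four times the actual memory clock, so 5,500MHz effective is a 1375MHz clock):

```python
def overclock_pct(stock_mhz, oc_mhz):
    """Return the overclock as a percentage over the stock clock."""
    return (oc_mhz - stock_mhz) / stock_mhz * 100

# Reference HD 7970: 925 MHz core, 1375 MHz memory (5500 MHz effective).
core_oc = overclock_pct(925, 1175)   # core: 925 -> 1175 MHz
mem_oc = overclock_pct(1375, 1625)   # memory: 1375 -> 1625 MHz
print(f"core +{core_oc:.0f}%, memory +{mem_oc:.0f}%")  # core +27%, memory +18%
```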
http://www.guru3d.com/article/radeon...rclock-guide/1
http://www.anandtech.com/show/5458/t...ing-and-msaa/2
You might want to ask AIBs about their reluctance to really, really push the GPU core. The way you overclock and the way they have to qualify products is entirely different. It also depends on when you bought your card; the first batch wasn't brilliant in terms of overclocking.
So ATI are testing all the GPUs, picking out the ones that overclock best, and charging extra for them just so they can say they are the fastest.
Kinda screwing over the overclocking scene aren't you?
ATI doesn't exist any more and hasn't done for 4 years - it's AMD now ^^
BTW, you might want to check the writing vs the chart for the Crysis 2 page - the article says AMD is 7% faster, whereas the chart says the GTX is fastest ;)
Not at all. Now you know that if you want a chip capable of reaching higher frequencies you can buy it as a GHz Edition. If you don't need the higher frequencies you can buy a normal edition.
If you like, you could buy a GHz Edition card, set its clocks to the normal edition's, then overclock it! ;)
Introducing the boost and advanced PowerTune features is very interesting - that's a quick turnaround.
Until we get the non-reference cards, it will be hard to say how much higher the best retail cards will be clocked. Moreover, when users get these cards we might get an impression of how well the HD7970 V2 overclocks.
Regarding the overclocking ability of the HD7970 V1: as we know, overclocking headroom is not a given, but at least from what I have seen on multiple forums, the HD7970 V1 does overclock a decent amount. 1050MHz is not a hard clock for it to reach. Moreover, I would argue the reason why third-party cards are clocked lower is to reduce power consumption and noise, and to make it easier for them to validate almost all their chips, ie, they are being conservative.
The thing is though, if the HD7970 V2 is now running at around 1000MHz to 1050MHz, what clockspeeds will the pre-overclocked cards be running at?? There could be cards hitting nearly 1.2GHz!
OTOH, with the HD7970 V2 having similar performance to a GTX680 (slightly better in multi-monitor situations it seems, but with higher load power consumption), it should mean more competition in that part of the market, and hopefully better prices.
I was reading the Anandtech review and, despite their misgivings about the noise, they said it is the first time in six years that AMD has managed to effectively compete at the single-GPU high end with Nvidia.
Cheers - was slightly confused reading one thing when the graph showed something different :D
Can I ask - could you add in the first-gen DX11 cards' performance? Maybe the GTX 480 and HD 5870? To show just how much faster the latest and greatest really is?
Granted - I have a sample of 1, and they have many thousands that they need to guarantee are 100% stable for their entire life-cycle. I'm still pretty blown away with how easy it is to overclock though. I certainly don't keep the clocks as high as I stated at all times - I turn them up or down depending on whether I'm gaming or not. Not to mention that I've watercooled it :mrgreen:
I'd have thought another major issue is the power usage. It really does take a frightening amount of juice, this card!
Are they running the £5 per core cashback on this too? :woowoo:
Mine came factory overclocked at 1GHz, but goes up to 1125MHz (the maximum Overdrive allows) without any problem. The newer Asus 7970s (v1) are either voltage locked or have different hardware that can't yet be software controlled, so I haven't been able to try a higher voltage, BUT the voltage on this card is only 1.080v, which is really pretty low for a stable 1125MHz.
Teppic - give Sapphire Trixx a go. It works with non-Sapphire cards I think. It's pretty good, and it allows you to go higher than the Catalyst control centre does.
https://www.sapphireselectclub.com/ssc/TriXX/TriXX.aspx
That is a known issue with the Asus cards, I think.
Not bad really - quick turnaround on that software power boost, so good stuff there. Can I ask what the stock voltage was for your card? I might have missed it, but I didn't see it, and it was said these would be coming in at 1GHz with a much lower voltage (i.e. strong high-end chips, better yields!). I can easily see 1.2GHz models from partners, and that's a good thing. I think you're a bit harsh in the comment on TDP, assuming it was a GPU-only test (again, I missed the description, but it seems very low now :D), as the overshoot is very small and clearly due to the overclock from 1GHz to 1,050MHz! And may I point out that it is only 3.6% over the stated 250W TDP. I think you should be looking at that GTX680 and slamming Nvidia, as their rated TDP is 195W yet you see a 221W draw!? That's 13% higher than stated - quite a bit higher if you ask me.
AMD haven't had time to refine their power boost, so only being a few % over the TDP is nothing serious, and I'm worried that Nvidia has had a while to make sure their boost (or is it tune?) works without exceeding the TDP.
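The overshoot figures quoted above are easy to verify. A minimal sketch - the 259W measured draw for the 7970 is my inference from "3.6% over the stated 250W TDP", not a number given in the thread:

```python
def pct_over_tdp(measured_w, rated_w):
    """Return how far a measured power draw exceeds the rated TDP, in percent."""
    return (measured_w - rated_w) / rated_w * 100

amd_over = pct_over_tdp(259, 250)     # 7970 GHz Edition vs its 250 W TDP
nvidia_over = pct_over_tdp(221, 195)  # GTX 680's measured draw vs its 195 W TDP
print(f"AMD +{amd_over:.1f}%, Nvidia +{nvidia_over:.1f}%")  # AMD +3.6%, Nvidia +13.3%
```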
Other than that, good review, and I believe the AMD side is the better buy purely on it being an all-rounder in performance, including GPGPU. I'm waiting till the 8000 series though haha.
It would be nice if we got a proper rundown of the settings used though, even if it's just 'Highest in-game w/ 4xMSAA'.
Edit: I get that the above isn't too different from what you do now, but some games have a named 'High' setting, so it would be nice to know exactly which one you used. :)
Most reviews show how much better the ATI cards are with regard to compute performance. How important is compute performance when it comes to games? I thought that if you have no intention of running CUDA/OpenCL simulations, it shouldn't matter.
Interesting tidbit from a chap who works for AMD:
http://forum.beyond3d.com/showpost.p...postcount=3701
Quote:
Originally Posted by AMD CHAP
You're kind of answering your own question - compute isn't that relevant to games (except where they simulate physics etc.). But it's relevant to people who want to use their GPUs to accelerate graphical work (photoshop for eg.) or transcoding (handbrake) or compression (winzip) etc.
That's a great card. Although a little overpriced.