AMD laid off a lot of its driver team during the recent layoffs, IIRC - what a pile of fail that decision was!
Only Rumours
AMD is said to be planning a soft launch of the HD 8000 by the end of the year, with a target performance of (reportedly) 1.3x the HD 7000.
Nvidia was surprised by the performance of its 28nm chips: GK104 was meant to be the mid-range 660 & 660 Ti, while GK100 was to be the 670 & 680.
The way things have panned out, I wouldn't be surprised to see GK100 deferred until AMD release competition, but with Nvidia already at a 1.2x advantage it looks like AMD are in for a rocky road. It wouldn't surprise me to see AMD ignore the high-end market and concentrate on the mass-market (low-to-mid-end) ranges between £100 and £200.
Nvidia won't enter this arena for at least six months, as they need to recoup the R&D costs of Kepler. Even so, who would put money down on AMD at this point in time, knowing that neither performance nor price is stable?
Out of interest, why are TPU's results for the 680 16FPS faster than yours in Batman? They're using 12.3 yet still manage to net 77FPS for the 7970. The test systems differ slightly in terms of CPU, but that shouldn't produce such a difference. For some reason their tests are done without AA as well. :shocked2:
http://www.techpowerup.com/reviews/A...t_Cu_II/8.html
http://tpucdn.com/reviews/ASUS/GeFor..._1920_1200.gif
Edit: AT's results for Batman (X79 test bed) have the 680, 670, and 7970 within 3FPS of each other between 93-96FPS.
http://images.anandtech.com/graphs/graph5818/46428.png
Am I missing something?
I think the problem is that AMD are trying to make the best GPGPU. This is a good idea, and it was something Nvidia pushed first in the discrete market: Fermi was that computational powerhouse, and it looked poor compared to AMD because AMD went for stronger gaming performance. Now AMD have swapped positions with Nvidia... Next gen will be a win for AMD, I reckon. GPGPU is here to stay, and remember AMD are in the mobile device market (yes, so are Nvidia, but not so much...), so having a strong GPGPU means their all-in-one approach for laptops etc. will greatly improve.
It's hard to tell what's upcoming, but currently it's as simple as Nvidia having a GPU that does a specific job - gaming - while AMD's is the jack of all trades. AMD can still cut the cost of their GPUs, so they will stay competitive, but by how much is what we shall find out :).
Terbinator,
We're using Catalyst 12.4. The difference between these drivers and, say, 12.2 are huge in Batman: the scores went up by a significant degree. Our results are within 10 per cent of one another for the three mentioned cards. We use 8x MSAA, which, given the HD 7900-series' memory bandwidth, plays better for them.
Also, look at AT's HD 7970 and HD 7950 results. Do you really think the HD 7970 is almost 30 per cent faster than an HD 7950? That seems way too much, IMHO.
Are they all benchmarking the same thing? Scenes, demos, etc.?
I'll add another for you :)
http://hothardware.com/articleimages...845/batman.png
Benchmarks have been all over the place since Kepler was released tbh.
Check out BF3 on 4 different sites -
http://images.anandtech.com/graphs/graph5818/46436.png
680 is 28% faster than the 7970, 670 is 19% faster on AT.
http://tpucdn.com/reviews/NVIDIA/GeF..._2560_1600.gif
680 is 5% faster than the 7970, 670 is 4% slower on TPU.
http://img.hexus.net/v2/graphics_car...670GTE/BF3.png
680 is 17% faster than the 7970, 670 is 8% faster here.
http://media.bestofmicro.com/T/G/336...eld%202560.png
680 is 5% faster than the 7970, 670 is 2% slower on THG.
That's some wildly different results for the same game.
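For anyone wondering where those percentages come from, they're just relative-FPS calculations off each site's charts. A quick sketch of the arithmetic (the FPS figures below are made up purely to illustrate; they aren't taken from any of the linked reviews):

```python
def pct_faster(fps_a, fps_b):
    """Percentage by which card A outpaces card B; negative means it's slower."""
    return (fps_a / fps_b - 1.0) * 100.0

# Hypothetical numbers, just to show the calculation:
print(f"{pct_faster(128.0, 100.0):.0f}%")  # 128FPS vs 100FPS -> 28%
print(f"{pct_faster(96.0, 100.0):.0f}%")   # 96FPS vs 100FPS -> -4%
```

Which is why a few FPS either way on a chart can swing the quoted gap by several percentage points.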
Some people run a Core i7 990X at 4.8GHz as the base system, others a Core i5 2500K. There are way too many variables - AA, AF, map, length of run, number of runs, quality settings, etc. - to contend with when trying to draw an accurate consensus between different websites. All you can do, I suppose, is look at one site and see which disclosed settings most closely match what you'd play with.
So the card still has a lot of power, as expected, but for me it's still 100 quid more than I'm willing to shell out. Thank god my 5850 still has the power to run most games - I might have to reduce FSAA and detail a bit, but it's worth it.
Very much this, basically what I've said since release. Like you say, all you have to do is look at the 480 to see how many problems a massive architecture change introduces. The self-overclocking thing isn't playing fair in benchmarks though TBH, as said AMD could quite easily do the same given the overclock headroom they all seem to have.
Also, as Scainer says, I fully expect the 780 to compete with the HD8000 series, calling the current card the 680 pretty much sealed the deal there - it wouldn't exactly be commercially sensible to release the 685 immediately after the 680, certainly not given yields of the much smaller 680 die...
It seems another major French review site also saw 1,084MHz:
http://translate.googleusercontent.c...gSGi48cCBy8kCg
They also mention at the beginning that other colleagues have seen boosts of up to 1,123MHz. AnandTech mentions 1,084MHz too.
So it seems the review samples are hitting at least 1,084MHz. Either the review samples are clocking higher than retail cards will, or the 980MHz figure Nvidia is quoting for retail cards is inaccurate.
Bit-tech also noted "aggressive use of GPU Boost, as we saw our review sample reach core frequencies of up between 1,045Mhz and 1,084MHz during testing"
rather than the 980MHz that the standard card is meant to have.