It looks like SR is still on track:
http://www.slideshare.net/AMD/surrou...k-play#btnNext
Good, though I still wonder just where AMD are headed these days.
Now, see those blocks marked "Decode" that convert amd64 instructions into internal uops; could they be made to decode ARM instructions too, I wonder...
Old news:
http://www.theinquirer.net/inquirer/...re-x86-version
"You'll see the ARM APU first [before an Opteron APU]. You'll see the first ARM APU at a similar time as the first ARM chip. The value proposition [of an APU] is that we can go to the memory controller to access a common memory pool, that is what we have already done between our X86 and GPU cores. ARM has joined the HSA foundation and we have some work to do to make the ARM [core] and the GPU access the same memory. [...] Once that is done we can bring an ARM APU to the market."
BTW, over on overclock.net, two forum members have started a thread comparing a Core i5 3570K and an FX8350 in some games:
http://www.overclock.net/t/1333027/a...-crossfire-gpu
Both systems are more or less identical, except one has an FX8350 at 4.8GHz and the other a Core i5 3570K at 4.8GHz, with a pair of HD7970 cards in both cases.
Edit!!
It appears Llano CPUs for the desktop use TIM under the IHS:
http://forums.anandtech.com/showthread.php?t=2289709
Just had a quick look at the benchmarks; they look really interesting! Kudos to the guys for doing it - it's relatively hard to find CPU comparisons at normal game settings.
As for the Llano TIM, it's a bit disappointing to see that IMHO. Considering both Intel and AMD are now doing it, I wonder if it's actually substantially cheaper, or if soldering can potentially kill a few CPUs, for example?
Edit: I've read that post properly now. It seems there's no appreciable difference between the CPUs for the games tested; IMO it would be nice to see a 6300 in that sort of test too. Pity the thread descended into fanboy trolling at the end - I hope it doesn't discourage similar tests in future.
My spec is to the left if anyone has anything similar but with a different CPU; it would be interesting to compare against both BD/PD and Intel systems - core performance may be lower but there are six full cores. I could also disable a couple of cores to test multicore scaling.
A mate is getting an FX6300 in his system soon, although I have ditched my Core i3 now. I might be able to try some benchmarks though, as we both have HD5850 cards.
http://www.techpowerup.com/177435/MS...-Displays.html
It's a shame AMD doesn't have any nano-ITX motherboards based on their laptop SKUs.
TBH, why don't they introduce some 35W TDP desktops??
Probably quite a small market, which they have traditionally served with 18W E-450 boards.
There was talk a long time ago of being able to set an upper power limit in the BIOS, so any CPU could become your 35W toy. I think I would prefer jumpers for that though; that way you could have, say, "20W, 40W, 60W, full" as settings, and if the BIOS gets wiped the thing doesn't overheat.
Well, you could do something similar by limiting the max multiplier and undervolting, but that's fiddly and requires good BIOS support.
I'd like to see at least one 35W option, even if it's just a downclocked A4-5300; it'd be a great option for general office PCs and microservers - after all, they did it with the AM3 platform (25W Athlon II X2 250u), albeit in a limited, hard-to-get-hold-of form...
Undervolting, as I've said elsewhere, can potentially cause instability and is very difficult to test. Yeah, it's easy to test at full load, but at idle or in between, a lower voltage set by the offset might be too low and cause intermittent problems or potentially silent corruption. Setting a fixed voltage generally means a higher idle voltage, so greater idle/low-load consumption. However, it may be possible to lock the standard idle voltage for all loads; that way you're safe at idle, and if full load is stable (it may need an underclock) you're good.
I agree setting max TDP with jumpers would be a good way to do it. It would also be useful for systems using a picoPSU, so you don't have to spend extra to get a larger PSU to cope with infrequent full-load spikes, for example.
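If you want to experiment with the idea in software rather than waiting for BIOS or jumper support, here's a minimal sketch on Linux using the cpufreq sysfs interface (needs root). To be clear, this caps the maximum P-state rather than TDP itself, so it only limits power indirectly, and the 2GHz ceiling is just a made-up example:

    import glob

    # Hypothetical frequency ceiling in kHz. Capping frequency limits power
    # only indirectly - it is NOT a true TDP limit like a BIOS/jumper setting.
    CAP_KHZ = 2_000_000  # 2.0 GHz

    # Write the cap to every core's scaling_max_freq; the kernel then clamps
    # all future frequency requests to this value.
    for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq"):
        with open(path, "w") as f:
            f.write(str(CAP_KHZ))

You'd still want to sanity-check the actual draw at the wall, since a frequency cap says nothing about voltage.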
This person ran a Core i7 920, Core i7 3820, FX6200 and FX6300 at 4GHz with a GTX670 and tested some games:
http://www.overclock.net/t/1337699/s...#post_18816492
Although his conclusions are kind of strange, considering the FX6300 is around £100, it does relatively well IMHO.
That was a strange read... Interesting concept, testing a bunch of systems to see which to keep - he clearly has too much money. Unfortunately he seems to misunderstand comparisons; putting Intel's 4 cores/8 threads against AMD's 3 modules/6 cores doesn't make sense, so how is that an architecture vs architecture comparison?
Despite that I agree with you CAT, it looks like the FX6300 did quite well considering its price.
An HD7970 and a GTX680 have been tested with a range of AMD and Intel CPUs:
http://uk.hardware.info/reviews/3714...d-with-10-cpus
The A6-5400K beats the Pentium G860!!
Interesting, but there are a few strange results there - for example, the massive jump from i5 to i7 in BF3 multi-monitor?
The A6 also beats the i3 in many games! Strangely, so does the Pentium - maybe Hyper-Threading is having a negative impact on performance? If so, I wonder if the Windows scheduler is still to blame? It would be interesting to know which version of Windows they're using; 8 is meant to carry some improvements in that area.
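One way to test the Hyper-Threading theory without touching the BIOS would be to pin the game to one logical CPU per physical core, so the scheduler can't put two of its threads on the same core. A minimal sketch using Python's psutil - the PID is hypothetical, and it assumes HT siblings are numbered adjacently (0/1, 2/3, ...), which is worth verifying on the CPU in question:

    import psutil

    game = psutil.Process(1234)  # hypothetical PID of the game process

    # Take every other logical CPU, e.g. [0, 2, 4, 6] on a 4-core/8-thread
    # chip, assuming the two threads of each core are numbered adjacently.
    physical_only = list(range(0, psutil.cpu_count(), 2))

    game.cpu_affinity(physical_only)  # the game now avoids HT siblings

If the odd i3 results really are down to HT scheduling, it should close the gap with affinity set like that.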
The point they make about potential driver overhead for Nvidia/AMD graphics cards is also interesting...