I think your analogy is pointing the wrong way. If you want to isolate how good the engine is and remove the bottlenecks of fuel, chassis, tyres etc., then you need to road test on a drag strip with slick tyres and nitro injection. That would be an entertaining piece (which, thinking about it, I would happily watch) but of only academic interest, as only a handful of drag racers ever drive like that regularly. Sounds like an ideal piece for Top Gear, not for a serious car review.
ISTR that low-res benchmarks were introduced way back in the Quake era. I think including minimum frame rate or maximum frame latency gives a better idea of the "lumpiness" that low-resolution average frame rates did such a poor job of exposing. I think low-res benchmarks have had their time.
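To make that concrete, here's a minimal sketch (all frame times invented for illustration) of how two runs with identical average frame rates can differ wildly in minimum frame rate and worst frame latency:

```python
def fps_metrics(frame_times_ms):
    """Summarise a run of per-frame render times (milliseconds)."""
    avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
    min_fps = 1000.0 / max(frame_times_ms)   # worst single frame
    max_latency_ms = max(frame_times_ms)
    return round(avg_fps, 1), round(min_fps, 1), max_latency_ms

# A smooth run and a lumpy run with the *same* average frame time:
smooth = [16.7] * 60
lumpy = [10.0] * 50 + [50.2] * 10            # occasional 50 ms spikes

print(fps_metrics(smooth))  # (59.9, 59.9, 16.7)
print(fps_metrics(lumpy))   # (59.9, 19.9, 50.2) - same average, ugly spikes
```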
Yes, the GPU silicon exists, but it's turned off, so it contributes approximately zero to the TDP. The CPU should be able to use that TDP headroom to boost more aggressively in an ideal world, so there must be a bottleneck somewhere. The bottleneck may of course wear a suit and be called "Marketing"; who knows, it just doesn't feel right.
It's probably down to binning and testing, tbh. Presumably there's a set of automated tests run on the silicon to determine the appropriate bin. Let's assume that one of those is a clock speed test, and another is a GPU test. If a part fails the GPU test but passes the clock speed test, I guess you have two options: you can either simply sell that part as a CPU-only at the initially tested clock speed, or you can bin it for further testing against a variety of higher clock speeds with the aim of selling it as a faster CPU-only part. At which point, you have to ask yourself whether they'll ever be able to charge enough for a quad-core APU-based CPU to warrant the additional testing.
My guess would be that they don't think there's enough frequency headroom, regardless of TDP, to make further testing economically viable: slap it out as an unlocked CPU part, price it to sell, and let the enthusiasts mess with the clock speed if they want to. AMD make some money back off a faulty die, so they're happy with that.
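As a toy sketch of that decision flow (the test names, bins and economics below are all guesses, not AMD's actual process):

```python
def bin_die(gpu_pass, base_clock_pass, retest_is_worth_it):
    """Decide what to sell a die as, given automated test results."""
    if gpu_pass and base_clock_pass:
        return "APU"
    if base_clock_pass and retest_is_worth_it:
        return "CPU-only, retested and binned at a higher clock"
    if base_clock_pass:
        return "CPU-only, unlocked, priced to sell"
    return "scrap"

# If the quad-core CPU-only part can't command a premium, skip the retest:
print(bin_die(gpu_pass=False, base_clock_pass=True, retest_is_worth_it=False))
```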
EDIT: I guess the litmus test would be comparing the overclocking results of the Athlons compared to the APUs they're based on...
I always wanted to build a mini-ITX FMx-based system just for the fun of it, but I've never had the time. It would be really interesting to see how the Athlon genuinely compares. Maybe this Christmas, when I have loooadddssss of time off, I will give it a bash... maybe.
I wonder if the FM2+ era will kill off the AMx sockets and we'll start seeing higher-end Piledriver/Steamroller/Excavator parts fitting into the same socket as the APUs? There doesn't seem to be any sign of development beyond AM3+ at the moment.
A lot of the speculation on that seems to assume that will happen when AMD switch to DDR4. I guess the big question is whether they'll want to wait for mass-market adoption before making the move, or whether they'll look to push the market by being first with the new tech. It can only help their APUs to get more channels and faster memory (I'm assuming they'll put 4 DDR4 channels in to address up to 4 DIMMs) and the higher-end CPUs really need a new platform ASAP. So unifying the products on a new socket with DDR4 support would make a lot of sense, but when AMD will decide the market is ready for that is anyone's guess....
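For a rough sense of what that memory change would be worth, here's a back-of-envelope calculation (the DDR4-2400 speed is my assumption about launch-era parts):

```python
def peak_bw_gb_s(mt_per_s, channels, bytes_per_transfer=8):
    # Each 64-bit channel moves 8 bytes per transfer.
    return round(mt_per_s * channels * bytes_per_transfer / 1000.0, 1)

print(peak_bw_gb_s(2133, 2))  # dual-channel DDR3-2133: 34.1 GB/s
print(peak_bw_gb_s(2400, 4))  # quad-channel DDR4-2400: 76.8 GB/s
```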
To kill off AM3+ they'd need to decide to abandon multi-socket motherboards. Big virtual machine farms and cloud services can use the sea-of-single-socket-servers platform AMD purchased (SeaMicro), but that doesn't help something like a huge database or Exchange server that wants 32 cores in a single OS instance.
If not many people buy those machines these days, then perhaps they can walk away from the market, but I think it would make them look a tad amateur in the server market right now.
For consumer use, I think if they scrapped the graphics half of the die and went up to 6 cores, calling it a Phenom III, they could ditch AM3+ now. That's speaking as an 8-core customer who very nearly bought a 6350, because the performance isn't much different for what I do.
Well, that is interesting. This says the 6800K overclocks like a mad thing, with an average of 5GHz on air:
http://hwbot.org/hardware/processor/a10_6800k/
Whereas the 760K is slower on water, cascade and liquid nitrogen. Oh, and no-one bothers overclocking them on air. The 750K was worse, but they have more than 16 submissions for that, so it might be a better indicator of what the Athlon can do.
http://hwbot.org/hardware/processor/athlon_x4_760k/
I dunno, I don't see why they couldn't shift all 1P systems onto FM*, while migrating multi-socket servers onto DDR4-based replacements for C32 and whatever the other multi-socket one is (G34?). If they shifted to a PCIe-based socket interconnect instead of HT they could still use the same silicon for GPU-less 6- and 8-core CPUs on the FM* platform and on multi-socket platforms (with MCMs to hit the 16+ core chips like they do now). Bundling 4 or 8 PCIe 3 lanes for the interconnect shouldn't be that much of an issue, really - if they're going to push it as an enthusiast platform as well, they should be looking to make 32+ PCIe lanes directly available on the CPU die...
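Rough numbers on that interconnect idea (the per-lane figure is the usual published one, and the HT comparison is ballpark, so treat this as a sketch rather than a design):

```python
PCIE3_GB_S_PER_LANE = 0.985  # 8 GT/s with 128b/130b encoding, per direction

for lanes in (4, 8):
    print(f"PCIe 3.0 x{lanes}: ~{lanes * PCIE3_GB_S_PER_LANE:.1f} GB/s per direction")

# For scale, a 16-bit HyperTransport 3.1 link at 3.2 GHz manages ~12.8 GB/s
# per direction, so an x8 bundle is in the same ballpark for raw bandwidth;
# latency and cache coherency are the harder problems.
```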
More details about the core replacing Jaguar in 2014:
http://www.pcper.com/news/Processors...llins-Cut-TDPs
http://www.anandtech.com/show/7514/a...ma-and-mullins
More details about Mantle (thanks Bagnaj97):
http://translate.googleusercontent.c...ZvzwEd9o4SMFTw
Oxide and Mantle:
http://translate.googleusercontent.c...fxfeFMWsnB-GFQ
You might need to look at the non-translated version for the pictures.
Mantle explained!!
Also:
https://twitter.com/ryanshrout
Originally Posted by Ryan Shrout, PCPer
Originally Posted by cavemanjim
This is awesome news TBH.
If Mantle can reduce the CPU overhead massively, anyone with a SB Core i5 or an FX-6000 series CPU and above should be fine for years in games with Mantle support.
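As a toy illustration of why (every number below is invented, just to show how per-draw CPU cost caps the draw-call budget):

```python
def max_draw_calls(frame_budget_ms, per_draw_cost_us):
    # Assumes the whole frame budget goes on submitting draws, which it
    # never would in practice - this is only about relative headroom.
    return int(frame_budget_ms * 1000 / per_draw_cost_us)

budget_ms = 1000 / 60  # ~16.7 ms per frame at 60 fps

print(max_draw_calls(budget_ms, per_draw_cost_us=40))  # thick API: ~416 draws
print(max_draw_calls(budget_ms, per_draw_cost_us=5))   # thin API: ~3333 draws
```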
Edit!!
AMD insists that Mantle is not intended to be limited to one architecture. The base Mantle level exposes only relatively generic functions that could be supported by other architectures, while an extended Mantle level adds support for functions currently specific to Radeon. Mantle could thus potentially become a standard with extensions, but nothing says that would interest Nvidia, or that AMD's management of it wouldn't impose conditions that are difficult to accept.
Note that although the AMD forum was closed to outsiders, on the way out of the Mantle presentation we crossed paths with several Nvidia employees, including one of the main architects of the GeForce GPUs. AMD's competitor appears visibly as curious about Mantle as we are!
The MCM approach they use in current server parts just wires two dies together using an HT link, so if they lose HT they can't do that. I don't think PCIe is really suitable for a NUMA inter-CPU connection.
Perhaps they could include an HT link in the FM2+ silicon, and just leave it disabled on the consumer part. Die photos show unused PCIe lanes on the 8350, so unused port pins don't seem to do any harm.