TBH it's not really the CPU that's the issue for idle power efficiency any more as the cores can be gated. It's more the power supplies and motherboards that have room to improve.
Hmm... that's certainly true, although I like the coprocessor idea more for the coding side of things. It would mean you could load your main CPU the way you load a GPU in GPGPU work. OK... I'll admit it... my code has prioritisation issues (to the point where it has been known to hang Windows and crash the NT kernel - Linux has proven a little more resilient), but the GPU copes with it if it isn't trying to run anything in the background (I'm running my screen off the integrated graphics).
Back on topic: I really hope that AMD's newer architectures put them back near the top of the performance segment without being mini fusion reactors. Personally I also don't want to see an insane number of cores (for the number crunching I do, the fact that multithreading is a thing just serves to remind me that there is a deity and they are most definitely vengeful - it's a damn linear algorithm...)
Developing an ARM core from the ground up surprises me - they don't have much/any RISC experience IIRC and yet they must be intending to go head to head with the RISC masters - well, competition is always good. It'll be interesting to see what issue width they go with, considering Bulldozer was considered too wide but Apple's A7 is significantly wider IIRC.
Nobody makes CISC chips any more. Intel and AMD make RISC chips with an AMD64 translator on the front end. More to the point they make big enterprise grade CPUs, and none of the other ARM licensees do that (nearest would be Nvidia but they seem to be struggling with Denver).
Edit to add: It was a while ago now, but the AMD 29K was supposed to be rather a nice RISC cpu. Only ever saw them in laser printers though. Interestingly the K5 had a lot of elements of the 29K in it, so one thing that AMD do have experience of is chopping one CPU architecture into another. That might allow them to share lots of work between an AMD64 chip and an ARM chip if they design them with that in mind from day one.
Last edited by DanceswithUnix; 06-05-2014 at 07:44 AM.
That's very interesting. I thought that in the early 2000s the verdict was that although RISC operations were faster to execute, more of them were needed, so it was an overall net loss compared to CISC. I did a cursory Google but didn't see an obvious answer: what could possibly be a logical reason to continue using a translator? No matter how good it is, it has to slow things down. Is it just for legacy (Windoze), and should we instead be hoping for processors to move over to something like a SPARC64 instruction set at some point (which IIRC has the best transistor-to-performance ratio, so theoretically the highest power efficiency)?
I shall have to investigate more... but after exams.
Cheers!
but wasn't sparc also RISC?
In a nutshell, the world moved on from the debate. Intel used politics and economic clout to silence the best of the RISC opposition, but also the CISC processors became simpler by moving the big clumsy instructions into emulated microcode. As more transistors became available the remaining RISC and Intel CPUs used much the same tricks to improve performance, so the gap narrowed.
Intel say the x86 conversion takes something like 5% of the CPU. But then they use a very big die.
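To illustrate what that front-end translation does (a toy sketch only - the instruction and micro-op names here are made up and look nothing like Intel/AMD's real internal encodings): a single CISC-style instruction with a memory operand gets split into simple load and ALU micro-ops that a RISC-like back end can execute.

```python
# Toy sketch of a CISC-to-micro-op front end. Hypothetical encodings,
# not the real Intel/AMD translation hardware.

def decode(instruction):
    """Expand a CISC-style instruction into RISC-like micro-ops."""
    op, *args = instruction.split()
    if op == "ADD" and len(args) == 2 and args[1].startswith("["):
        # reg += [mem]: split the memory access out into its own micro-op
        addr = args[1].strip("[],")
        dst = args[0].strip(",")
        return [
            f"LOAD tmp, {addr}",       # memory access becomes a plain load...
            f"ADD {dst}, {dst}, tmp",  # ...feeding a simple register-only add
        ]
    return [instruction]  # simple instructions pass through unchanged

print(decode("ADD eax, [rbx]"))  # ['LOAD tmp, rbx', 'ADD eax, eax, tmp']
print(decode("NOP"))             # ['NOP']
```

The point being: once everything past the decoder works on micro-ops like these, whether the programmer-visible ISA is "CISC" or "RISC" matters a lot less.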
China is heavily invested in MIPS, and most portable devices in the west seem to be Android ARM based. RISC and CISC are both alive and kicking, the economics is more important than the technology.
Mobile Kaveri SKUs leaked:
http://wccftech.com/amd-mobile-kaver...series-leaked/
Edit!!
Adaptive Clocking in AMD’s Steamroller:
http://www.realworldtech.com/steamroller-clocking/
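The idea in that article, roughly, is that instead of carrying extra voltage margin to survive supply droops, the chip briefly stretches its clock period when a droop is detected. A toy model (all the numbers below are invented for illustration, not AMD's actual Steamroller parameters):

```python
# Toy model of adaptive clock stretching on supply droop.
# Figures are illustrative only, not AMD's real design values.

NOMINAL_PERIOD_NS = 0.25   # ~4 GHz nominal clock
DROOP_THRESHOLD_V = 0.05   # react when droop exceeds 50 mV
STRETCH_FACTOR = 1.10      # slow the clock ~10% while the droop lasts

def clock_period(droop_v):
    """Return the clock period for a given measured supply droop."""
    if droop_v > DROOP_THRESHOLD_V:
        return NOMINAL_PERIOD_NS * STRETCH_FACTOR  # ride out the droop slowly
    return NOMINAL_PERIOD_NS                       # quiet supply: full speed

print(clock_period(0.01))  # 0.25 - no droop, nominal clock
print(clock_period(0.08))  # ~0.275 - stretched instead of adding voltage margin
```

Since big droops are rare, you pay a tiny average-frequency cost in exchange for running at a lower (more efficient) nominal voltage the rest of the time.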
Last edited by CAT-THE-FIFTH; 07-05-2014 at 09:35 AM.
Slightly confusing change in nomenclature with the FX series APUs. Far as I know, up to this point, FXs were CPUs and As were APUs. Is this accurate? And was there a reason for the change?
I'm a little confused with AMD's architectures ATM, between the desktop, mobile and the different series, I don't even know what they have in production anymore. But it's nice to see lateral thinking on AMD's part with the adaptive clocking.
The original use of FX was for the top binned Athlon silicon that could clock that bit faster but you paid quite a premium for them.
The current FX line is, frankly, stupid. There is nothing special about them, they are just AM3+ processors.
So this looks like a return to form in a way. You can get the standard 35W apu, or you can get the faster FX version.
AMD demonstrates the ARM A57 based Opteron A1100:
http://techreport.com/news/26419/amd...ed-server-chip
Kaveri ULV:
http://wccftech.com/amd-kaveri-mobil...ll-benchmarks/
It seems faster than many of the Haswell ULV CPUs!
Hmmmm, I am suspicious of those charts. The ULV i7 isn't included in the PCMark 8 chart - wonder why not...
The i5 4200U starts turning up in laptops at around the £450 mark, so AMD *must* get the A10 7300 in designs at that price. Even then they're working at a ~ 25% higher power budget (19W v. Intel's 15W). And I notice the FX-7500 isn't much faster than the A10 7300, so they can't afford to charge much of a premium for the FX part either - it looks like in general compute workloads it's going to be nearer the i5 than the i7.
For me the most interesting thing is the comparison to Trinity/Richland: I've got the flagship Trinity (A10 4600m) which is, what, 2.3GHz - 3.2GHz plus 384 VLIW4 shaders @ 497-686MHz - the ULV Kaveris are approaching those clock speeds. With the IPC improvements I suspect that the A10 7300 will match or beat my laptop at pretty much any task in just over half the TDP - pretty impressive. But if they don't get into £400 laptops, it'll be a bit academic anyway: and it's hard to find an A10 laptop for £400 any more (I got a pretty good deal on mine).
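For what it's worth, the "~25% higher power budget" above is easy to sanity-check from the TDP figures quoted (19W for the A10 7300's platform v. 15W for the i5 4200U):

```python
# Quick arithmetic on the TDP figures quoted above.
amd_tdp, intel_tdp = 19, 15  # watts
extra = (amd_tdp - intel_tdp) / intel_tdp
print(f"{extra:.0%}")  # 27%
```

So a touch under 27% more power budget - close enough to the ~25% ballpark.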
Supposedly the next generation AMD graphics cards will be launching this year and will use HBM:
http://videocardz.com/50472/amd-laun...cs-card-summer
They are being made at GF!!
Edit!!
Jaguar based pico-ITX motherboard released:
http://www.techpowerup.com/200720/ax...o-itx-sbc.html
Last edited by CAT-THE-FIFTH; 12-05-2014 at 05:15 PM.
Noxvayl (12-05-2014)