Quote:
Single and multi-core GB5 tests outpace the 16-inch MacBook Pro with Intel Core i9 CPU.
Meh, more useless benchmark suites. Apple is also known for profiling certain applications to tune performance, so it won't be representative of all software IMHO. There's also the fact that Apple is basically throwing transistors at their SoCs. How much of the improvement is because Apple is on 5nm while Intel is still stuck on 14nm?
Meh, all that tells me is that the A14X has more cores than the A14; I'd say 10 versus the A14's 6.
I've said it before: Geekbench is only really good for comparing like for like, but I'm sure Apple will wheel out some slides with huge gains in carefully selected situations.
While I'm sure the transition will be smoother on OS X than Windows (OS architectural differences and 'complete control over hardware' help), I still wouldn't count on ARM being faster with 'professional' programs.
Or maybe they swapped out the four energy-efficient "LITTLE" cores for four more big ones. A sort of "big.big" variant :D
Edit: I've thought for years that Apple CPUs were overly wide-issue for just mobile use; I presumed it was leading up to this moment. They probably had a bunch of knobs they could tune for this die lined up, so it could actually be quite quick. It won't get me buying into the Apple ecosystem, but I have to be impressed with their CPU design skills so far.
Ofc the benchmarks are probably going to be "at 10W power level" or similar, a game Intel have been known to play when it was to their advantage.
Macs are great computers for doing very little. As soon as you start pushing, they show they were not made to work hard. Changing the silicon won't make any difference to most users, as current Mac owners do little more than browse and read emails.
Anyone that still does more than text on a Mac really should reconsider their life choices.
I can't wait to see comments elsewhere from people saying that the Apple chips are faster than i9s... Of course the numbers are still interesting to see.
Ah yes, a geekbench benchmark versus a thermally throttling i9, truly indicative of how far they've come!
Great... an ecosystem-locked device is somewhat better than a gimped, overheating, underperforming chip in a synthetic benchmark. I'm going to go back to Windows and AMD now.
Geekbench really is just nonsense across platforms, isn't it?
Just makes you wonder how efficient these huge ARM cores will be. Intel struggled with Atom to get a low-powered, competitive SoC going for the handheld market. I can imagine Apple will have the same issue upscaling their design to fit the needs of the high end. Might be fine for a regular MacBook.
Excellent performance, but in a super specific test. My concern as a software dev is compatibility with various compilers, development environments, tools, etc. Something tells me I will have to stick with the Intel variant of the MacBook Pro.
TLDR: x86 is a mess that doesn't scale down; ARM is comparatively clean and has no theoretical barrier to scaling up.
To be fair, the later Atom cores are a fair bit better (as were the equivalent AMD Cat cores), meaning the ones after Intel gave up on phones and tablets, generally labelled "Celeron". But that is partly because they aren't as scaled down as the early Atom chips. AMD64 is a really bad instruction set, which also drags the even worse 386 and the truly dire 286 modes behind it.
The dreadful variable-length CISC instruction set and ancient baggage are what stop x86 from scaling down. Even if you can manage the energy requirement to decode those instructions, it takes more silicon, and the low-end market is *really* price sensitive and won't tolerate an extra square mm.
ARMv8, OTOH, is a more recent instruction set. You can't run all the code from the original ARM chips; they don't even try. Throwing out the old baggage allows a cleaner approach, aided by the fact that ARM v1 is a heck of a lot better starting point than the 8086.
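To make the decode argument concrete, here is a minimal sketch (toy byte codes of my own invention, not real x86 or ARM encodings): with variable-length instructions you can't find where instruction N+1 starts until you've at least length-decoded instruction N, while fixed 4-byte instructions let a wide decoder compute every boundary up front.

```python
# Toy illustration of variable-length vs fixed-length instruction decode.
# The encodings are made up; only the data dependency is the point.

def decode_variable(code: bytes) -> list[int]:
    """Find each instruction's start offset, one instruction at a time."""
    offsets, pc = [], 0
    while pc < len(code):
        offsets.append(pc)
        length = (code[pc] & 0x0F) or 1  # toy rule: length lives in bits 0-3
        pc += length                     # can't advance without decoding first
    return offsets

def decode_fixed(code: bytes) -> list[int]:
    """Fixed 4-byte instructions: all boundaries are known immediately,
    so a wide decoder can work on every slot in parallel."""
    return list(range(0, len(code), 4))
```

Real x86 decoders spend extra silicon (predecode marker bits, uop caches) working around that serial dependency, which is exactly the cost that doesn't fly at the price-sensitive low end.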
We haven't seen a *really* fast ARM implementation yet, but it is very possible and now seems very much on the cards. ARM have just been talking about the A78C, where they take the usual 4+4 big.LITTLE setup and make it 8-core big.big with maxed-out L3 cache. Amazon have their 64-core Graviton server chips, and as that chip is now a year old I presume they will have a Graviton 3 with a more recent N2 core. For desktop/laptop-style use we now have the V1 cores.
https://www.anandtech.com/show/16073...neoverse-v1-n2
I look forward to my ARM V1 based Raspberry Pi :D
I've not seen core-size estimates broken out, but you have to remember the A14 chips include a really big tensor co-processor, a pretty big GPU, and all the camera DSPs etc. in their transistor count, which the likes of Zen 3 just don't have, making comparison hard.
If you want to do some really rough back-of-the-envelope calculations, there are some numbers here you can multiply out (see the sketch after this post): https://www.tomshardware.com/uk/news...ionic-revealed
I would myself, but it's the weekend and I'm eager to finish Crysis 3. I've been playing through my backlog of never played Steam games bought on sale, and when I've finished Crysis it might be FO4 next. You wouldn't want to keep me from that ;) :D
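Following up on the multiplying-out suggestion above, here is a hedged sketch of the method. The 11.8 billion total is Apple's published A14 transistor count; the non-CPU share and the big-to-little cost ratio are placeholder assumptions to show the arithmetic, not measurements.

```python
# Back-of-the-envelope per-core transistor estimate. Only the method is
# meant seriously; the share and ratio below are placeholder assumptions.
TOTAL = 11.8e9            # Apple's published A14 transistor count

non_cpu_share = 0.80      # assumption: GPU, neural engine, DSPs, I/O, caches
big, little = 2, 4        # A14 CPU configuration: 2 big + 4 little cores
big_to_little = 4.0       # assumption: a big core costs ~4x a little core

cpu_budget = TOTAL * (1 - non_cpu_share)
per_little = cpu_budget / (big * big_to_little + little)
per_big = per_little * big_to_little

print(f"CPU budget: ~{cpu_budget / 1e9:.1f}B transistors")
print(f"Per big core: ~{per_big / 1e6:.0f}M; per little: ~{per_little / 1e6:.0f}M")
```

Swap in real numbers from a die-shot analysis and the same two lines of arithmetic give you a defensible per-core figure to put against Zen 3.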
I have wondered before whether it is possible to produce a pseudo-x86-based CPU with some of the legacy instructions removed and still have a largely functional PC.
If Windows and Office work, then that is 95% of people covered. I wonder what software out there uses instructions that are considered obsolete.
Have Intel or AMD worked on something like that in the past?
I've heard unsubstantiated rumours, but nothing I believe and not for a long time (like a 386 that couldn't do 8086 mode).
The thing is, it is the variable instruction length and the available addressing modes that hurt you: the very CISC essence of x86/AMD64, not just the baggage.
Any attempt to run old Windows software on modern hardware always fails for me, so I don't think it would be any practical loss, just not a real win either.
VIA did do a RISC mode in their CPUs, some 7-byte fixed-length thing that gave better access to the underlying uOPs for testing purposes.
The thing is, I want to see how much of this performance is dependent on Apple jumping early onto new nodes and dumping transistors into the problem. AMD and Intel tend to be far more conservative in this regard, and usually wait for costs to be somewhat more balanced before committing.
I still maintain that if you build a chip with energy efficiency in mind, it will be very difficult to scale up, as every initial design decision will have been made to preserve power consumption. Some workloads let you throw more cores at the problem, but those are few and far between, for now.
I can see where you are coming from, but you have to keep the ideas of the instruction set architecture and the implementation apart.
x86 is an utter dog of an ISA, into which Intel and AMD have poured huge resources to create high-performance implementations. Really, it isn't suitable for low-power *or* high-performance use, but the penalty at the high-performance end is supposedly about 5% more transistors and a slight dip in performance that you can probably make up for by throwing some more transistors at the design. You can't throw transistors at power- or cost-sensitive designs, so ARM wins.
ARM is cleaner, but so far the implementations that you come across are low power.
I cannot think of a single aspect of the AMD64 instruction set that is better than ARM.
RISC-V, on the other hand (again, only low-power, low-frequency implementations so far), has some real big-boy performance features in the ISA: a decent 31 general-purpose registers (x0 always contains the handy constant zero), three-operand operations, and scalable register use, so there is no mode switching; if you want to do 32-bit, you just run the code. For a given number of transistors I expect RISC-V has the potential to be fastest, but someone has to take the risk of building such a chip. A toy sketch of those first two features follows below.
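Here is that sketch: a toy register file in Python showing the two behaviours named above (the hardwired zero register and non-destructive three-operand ops). It mimics the described behaviour only; it is not real RISC-V semantics.

```python
# Toy register file: register 0 is hardwired to zero, and add() names
# three registers (rd, rs1, rs2), so neither source operand is destroyed.
class ToyRegFile:
    def __init__(self):
        self.regs = [0] * 32

    def read(self, i):
        return 0 if i == 0 else self.regs[i]   # x0 always reads as zero

    def write(self, i, val):
        if i != 0:                             # writes to x0 are discarded
            self.regs[i] = val & (2**64 - 1)   # wrap to 64 bits

def add(rf, rd, rs1, rs2):
    # rd = rs1 + rs2: three-operand and non-destructive, unlike the
    # two-operand x86 form, where "add rax, rbx" clobbers rax.
    rf.write(rd, rf.read(rs1) + rf.read(rs2))

rf = ToyRegFile()
rf.write(1, 40)
rf.write(2, 2)
add(rf, 3, 1, 2)
print(rf.read(3))  # 42, and registers 1 and 2 still hold 40 and 2
```

The hardwired zero is also what lets one add instruction double as a move (add rd, rs, x0) without needing a dedicated mov encoding.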
Nah, Intel cores have long had a reputation for being really huge. Other cores point and taunt "who ate all the phys".
(For non-engineers: a "phy" is something that connects to the outside physical world, such as a PCIe or Ethernet lane, and "phys" rhymes with "pies". Yes, Saturday has already been a long day, and I'll get me coat... :D )
Logically, Intel, even on 14nm, could have made chips with fewer but larger cores, and the single-thread speed increase would have given them a faster chip. If they could, that is; despite people thinking Intel have been sandbagging all these years with their meager IPC increases, I have always maintained it was actually the best they were capable of. They could have put in more than 4 cores, though; that was just being cheap.
I am saying it more in terms of node transitions. Apple seems to bank on getting onto newer nodes and throwing a ton of transistors at their SoCs. IIRC, the A14 has around 12 billion transistors, and the A14X is going to have even more. The issue is, if Apple is relying on nodes, what happens when a node is cancelled or doesn't work out? Will they be capable of the increases we have been seeing? AMD and Intel have to plan more for not being on new nodes, and for potentially getting more out of existing ones (which Nvidia seems historically to have done better than AMD, for example). This is why I say they are more conservative: they have been stung in the past by a node not working, so they must be prepared to backport a newer design to an older node if required.
I want to see how much of these improvements depend on their fab partners being able to deliver in time. IIRC, once they didn't, and Apple had some problems with one of their earlier A-series chips.
The other issue is that once Apple starts making larger and larger SoCs, as they target higher and higher performance tiers, they will hit the same yield problems as the chips grow. Current mobile SoCs are relatively small, but unless they try something like AMD has done, they will start to have the issues Intel is having with its CPUs too. I have not seen much in the way of Apple talking about chiplet designs, unless I missed it!
Biggest mistake ever for Apple: rewriting and debugging all software for a mediocre ARM. OMG.
If there's a company that can pull it off, it would be Apple.
What's going to be the main difference between Apple's CPU and the Intel one they've used before?
Surely, when they went from PPC to x86, the same comment could easily have been made?
They went from a modern ISA with fairly little baggage to an ancient ISA still able to run the real horrors of early PCs: 8086 real mode, 64KB memory segments, a dire lack of registers, etc.
Not that an ISA is as important as it was with the current transistor budgets.
As for high-performance ARM: there's a lot more to it than the core, and some of the things high performance needs in terms of IO, memory bandwidth and so on will take up a fair bit of power, but I think scaling up is far easier than scaling down (just ask Intel: $billions wasted on Atom once they saw a threat, and nothing much to show for it).
Yes, but market segmentation can be very costly.
Atom was only ever allowed to be relatively competitive as long as it didn't compete with the Core cash cow, in the same way Core is not supposed to compete with the even higher-margin Xeon.
That is, ARM's business is high volume, lower margins. In the end, while Intel could always have had trouble with their fabs and node progression, the high-volume, lower-margin business they said no to meant that TSMC had enough money to invest in the future.
No guarantees of course, but Intel's misstep with fabs and process would not have mattered that much if TSMC wasn't ultra competitive.
Atom was not close to competitive in features or power consumption. But on price? Well, as long as they were giving the things away, they were in loads of products. Then Intel started charging money for them, about the time those of us with Atom devices (I have a couple of tablets here) were getting sick of the lousy battery life and constant thermal throttling from the high temperatures.