
Thread: Apple A14X benchmarks leak ahead of Apple Silicon Mac event

  1. #17
    Moosing about! CAT-THE-FIFTH's Avatar
    Join Date
    Aug 2006
    Location
    Not here
    Posts
    32,039
    Thanks
    3,910
    Thanked
    5,224 times in 4,015 posts
    • CAT-THE-FIFTH's system
      • Motherboard:
      • Less E-PEEN
      • CPU:
      • Massive E-PEEN
      • Memory:
      • RGB E-PEEN
      • Storage:
      • Not in any order
      • Graphics card(s):
      • EVEN BIGGER E-PEEN
      • PSU:
      • OVERSIZED
      • Case:
      • UNDERSIZED
      • Operating System:
      • DOS 6.22
      • Monitor(s):
      • NOT USUALLY ON....WHEN I POST
      • Internet:
      • FUNCTIONAL

    Re: Apple A14X benchmarks leak ahead of Apple Silicon Mac event

    Quote Originally Posted by DanceswithUnix View Post
    I've not seen core estimates broken out, but you have to remember the A14 chips have a really big tensor co-processor and a pretty big GPU, as well as all the camera DSPs etc., in their transistor count, which the likes of Zen 3 just doesn't have, making comparison hard.

    If you want to do some really rough back-of-the-envelope calculations, there are some numbers here you can multiply out: https://www.tomshardware.com/uk/news...ionic-revealed

    I would do it myself, but it's the weekend and I'm eager to finish Crysis 3. I've been playing through my backlog of never-played Steam games bought on sale, and when I've finished Crysis it might be FO4 next. You wouldn't want to keep me from that!
    The thing is, I want to see how much of this performance is dependent on Apple jumping early onto new nodes and dumping transistors into the problem. AMD and Intel tend to be far more conservative in this regard, and usually wait for costs to be somewhat more balanced before committing.
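
    As a minimal sketch of the kind of multiplying-out being suggested, here is a back-of-the-envelope comparison in C. The figures are approximate public ones (A14: ~11.8 billion transistors; Zen 3 CCD: ~4.15 billion), and the fraction of the A14 budget spent on CPU cores is a pure guess, which is exactly the difficulty being described:

        #include <stdio.h>

        int main(void)
        {
            /* Approximate public figures (assumptions, not vendor-audited):
             * the A14's ~11.8bn transistors include a 16-core NPU, 4-core GPU,
             * ISP/DSP blocks, etc.; a Zen 3 CCD is ~4.15bn transistors for
             * 8 cores plus 32MB of L3, with I/O on a separate die. */
            double a14_total = 11.8e9;
            double cpu_guess = 0.25;   /* illustrative: fraction spent on CPU cores */
            double zen3_ccd  = 4.15e9;

            printf("A14 CPU-ish budget: %.1f bn (if %.0f%% is CPU)\n",
                   a14_total * cpu_guess / 1e9, cpu_guess * 100.0);
            printf("Zen 3 CCD:          %.2f bn for 8 cores + L3\n",
                   zen3_ccd / 1e9);
            return 0;
        }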

  2. #18
    Registered+
    Join Date
    Nov 2020
    Posts
    25
    Thanks
    0
    Thanked
    1 time in 1 post

    Re: Apple A14X benchmarks leak ahead of Apple Silicon Mac event

    I still maintain that if you build a chip with energy efficiency in mind, it will be very difficult to scale up, as every initial design decision will have been made to preserve power consumption. For some workloads you can throw more cores at the problem, but those are few and far between, for now.

  3. #19
    root Member DanceswithUnix's Avatar
    Join Date
    Jan 2006
    Location
    In the middle of a core dump
    Posts
    12,986
    Thanks
    781
    Thanked
    1,588 times in 1,343 posts
    • DanceswithUnix's system
      • Motherboard:
      • Asus X470-PRO
      • CPU:
      • 5900X
      • Memory:
      • 32GB 3200MHz ECC
      • Storage:
      • 2TB Linux, 2TB Games (Win 10)
      • Graphics card(s):
      • Asus Strix RX Vega 56
      • PSU:
      • 650W Corsair TX
      • Case:
      • Antec 300
      • Operating System:
      • Fedora 39 + Win 10 Pro 64 (yuk)
      • Monitor(s):
      • Benq XL2730Z 1440p + Iiyama 27" 1440p
      • Internet:
      • Zen 900Mb/900Mb (CityFibre FttP)

    Re: Apple A14X benchmarks leak ahead of Apple Silicon Mac event

    Quote Originally Posted by protagonist View Post
    I still maintain that if you build a chip with energy efficiency in mind, it will be very difficult to scale up, as every initial design decision will have been made to preserve power consumption. For some workloads you can throw more cores at the problem, but those are few and far between, for now.
    I can see where you are coming from, but you have to keep the ideas of the Instruction Set Architecture (ISA) and the implementation separate.

    x86 is an utter dog of an ISA, into which Intel and AMD have poured huge resources to create high-performance implementations. Really, it isn't suitable for low-power *or* high-performance use, but the penalty at the high-performance end is supposedly about 5% more transistors and a slight dip in performance that you can probably make up for by throwing some more transistors at the design. You can't throw transistors at power- or cost-sensitive designs, so ARM wins there.

    ARM is cleaner, but so far the implementations that you come across are low power.

    I cannot think of a single aspect of the AMD64 instruction set that is better than ARM.

    RISC-V, on the other hand, again with only low-power, low-frequency implementations so far, has some real big-boy performance features in the ISA: a decent 31 general-purpose registers (x0 always reads as the handy constant zero), three-operand operations, and scalable register use, so there's no mode switching; if you want to do 32-bit, you just run the code. For a given number of transistors I expect RISC-V has the potential to be fastest, but someone has to take the risk of building such a chip.
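
    To make the three-operand and constant-zero points concrete, here is a minimal sketch: a trivial C function with, in comments, the RV64 instructions a compiler typically emits for it (the register allocation shown is illustrative, not any particular compiler's output):

        #include <stdio.h>

        /* Three-operand form: every RISC-V arithmetic instruction names two
         * sources and a destination, so sources need not be overwritten the
         * way two-operand x86 "add dst, src" clobbers dst. */
        long sum3(long a, long b, long c)
        {
            /* Typical RV64 code:
             *   add a3, a0, a1   # a3 = a + b (both sources preserved)
             *   add a0, a3, a2   # result = a3 + c
             *   ret
             */
            return a + b + c;
        }

        /* The hardwired zero register x0 gives free constants and compares:
         *   mv  a0, zero       # encoded as addi a0, x0, 0
         *   beq a1, x0, done   # branch if a1 == 0, no compare needed
         */
        long zero_val(void) { return 0; }

        int main(void)
        {
            printf("sum3(1,2,3) = %ld\n", sum3(1, 2, 3));
            printf("zero_val()  = %ld\n", zero_val());
            return 0;
        }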


    Quote Originally Posted by CAT-THE-FIFTH View Post
    AMD and Intel tend to be far more conservative in this regard,and usually wait for costs to be somewhat more balanced before committing.
    Nah, Intel cores have long had a reputation for being really huge. Other cores point and taunt "who ate all the phys".
    (for non-engineers, a "phy" is something that connects to the outside physical world, such as a PCIe or Ethernet lane, and "phys" rhymes with "pies". Yes, Saturday has already been a long day, and I'll get me coat...)

    Logically, even on 14nm Intel could have made chips with fewer but larger cores, and the single-thread speed increase would have given them a faster chip. If they could, that is; I have always maintained that, despite people thinking Intel have been sandbagging all these years with their meager IPC increases, it was actually the best they were capable of. They could have put more than 4 cores in, though; that was just being cheap.
    Last edited by DanceswithUnix; 07-11-2020 at 01:57 PM.

  4. #20
    Moosing about! CAT-THE-FIFTH's Avatar
    Join Date
    Aug 2006
    Location
    Not here
    Posts
    32,039
    Thanks
    3,910
    Thanked
    5,224 times in 4,015 posts
    • CAT-THE-FIFTH's system
      • Motherboard:
      • Less E-PEEN
      • CPU:
      • Massive E-PEEN
      • Memory:
      • RGB E-PEEN
      • Storage:
      • Not in any order
      • Graphics card(s):
      • EVEN BIGGER E-PEEN
      • PSU:
      • OVERSIZED
      • Case:
      • UNDERSIZED
      • Operating System:
      • DOS 6.22
      • Monitor(s):
      • NOT USUALLY ON....WHEN I POST
      • Internet:
      • FUNCTIONAL

    Re: Apple A14X benchmarks leak ahead of Apple Silicon Mac event

    Quote Originally Posted by DanceswithUnix View Post
    Nah, Intel cores have long had a reputation for being really huge. Other cores point and taunt "who ate all the phys".
    (for non-engineers, a "phy" is something that connects to the outside physical world, such as a PCIe or Ethernet lane, and "phys" rhymes with "pies". Yes, Saturday has already been a long day, and I'll get me coat...)

    Logically, even on 14nm Intel could have made chips with fewer but larger cores, and the single-thread speed increase would have given them a faster chip. If they could, that is; I have always maintained that, despite people thinking Intel have been sandbagging all these years with their meager IPC increases, it was actually the best they were capable of. They could have put more than 4 cores in, though; that was just being cheap.
    I am talking more in terms of node transitions. Apple seems to bank on getting onto newer nodes and throwing a ton of transistors at their SoCs. IIRC, the A14 has around 12 billion transistors, and the A14X is going to have even more. The issue is, if Apple is relying on new nodes, what happens when a node is cancelled or doesn't work out? Will they still be capable of the increases we have been seeing? AMD and Intel have to plan more for not being on new nodes, and for potentially getting more out of existing ones (which Nvidia seems historically to have done better than AMD, for example). This is why I say they are more conservative: they have been stung in the past by a node not working out, so they must be prepared to backport a newer design to an older node if required.

    I want to see how much of these improvements depend on their fab partners being able to deliver on time. IIRC, once they didn't, and Apple had some problems with one of their earlier A-series chips.

    The other issue is that once Apple starts making larger and larger SoCs, as they target higher and higher performance tiers, they will hit the same yield problems as the chips grow. Current mobile SoCs are relatively small, but unless they try something like what AMD has done with chiplets, they will start to have the issues Intel is having with its CPUs too. I have not seen much in the way of Apple talking about chiplet designs, unless I missed it!
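
    The yield point can be made concrete with the classic first-order model: if defects land randomly at density D, the chance a die of area A escapes them all is roughly e^(-D*A), so yield falls off exponentially with die size. A minimal sketch in C, with an illustrative (not published) defect density:

        #include <stdio.h>
        #include <math.h>

        int main(void)
        {
            /* Poisson yield model: yield = exp(-D * A).
             * D = 0.1 defects/cm^2 is an assumed, illustrative value,
             * not a published foundry figure. */
            double d_per_cm2  = 0.1;
            double dies_mm2[] = { 88.0, 150.0, 300.0, 600.0 };

            for (int i = 0; i < 4; i++) {
                double a_cm2 = dies_mm2[i] / 100.0;   /* mm^2 -> cm^2 */
                printf("%6.0f mm^2 die -> %5.1f%% yield\n",
                       dies_mm2[i], 100.0 * exp(-d_per_cm2 * a_cm2));
            }
            return 0;
        }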
    Last edited by CAT-THE-FIFTH; 07-11-2020 at 03:32 PM.

  5. #21
    Senior Member
    Join Date
    Aug 2006
    Posts
    2,207
    Thanks
    15
    Thanked
    114 times in 102 posts

    Re: Apple A14X benchmarks leak ahead of Apple Silicon Mac event

    Quote Originally Posted by Kumagoro View Post
    I have wondered before if it is possible to produce a pseudo-x86-based CPU with some of the legacy instructions removed and still have a largely functional PC.

    If Windows and Office work, then that's 95% of people covered. I wonder what software out there uses instructions that are considered obsolete.

    Have Intel or AMD worked on something like that in the past?
    AMD have tried a hybrid ARM/x64 chip but decided to stop working on it, last time I checked.

  6. #22
    Member
    Join Date
    Nov 2018
    Posts
    113
    Thanks
    0
    Thanked
    1 time in 1 post

    Re: Apple A14X benchmarks leak ahead of Apple Silicon Mac event

    Biggest mistake ever for Apple: rewrite and debug all software for a mediocre ARM. OMG.

  7. #23
    Registered+
    Join Date
    Nov 2020
    Posts
    25
    Thanks
    0
    Thanked
    1 time in 1 post

    Re: Apple A14X benchmarks leak ahead of Apple Silicon Mac event

    If there's a company that can pull it off, it would be Apple.

  8. #24
    Registered User
    Join Date
    Nov 2020
    Posts
    1
    Thanks
    0
    Thanked
    0 times in 0 posts

    Re: Apple A14X benchmarks leak ahead of Apple Silicon Mac event

    What's going to be the main difference between Apple's CPU and the Intel one they've used before?

  9. #25
    Senior Member
    Join Date
    Jul 2009
    Location
    West Sussex
    Posts
    1,721
    Thanks
    197
    Thanked
    243 times in 223 posts
    • kompukare's system
      • Motherboard:
      • Asus P8Z77-V LX
      • CPU:
      • Intel i5-3570K
      • Memory:
      • 4 x 8GB DDR3
      • Storage:
      • Samsung 850 EVO 500GB | Corsair MP510 960GB | 2 x WD 4TB spinners
      • Graphics card(s):
      • Sappihre R7 260X 1GB (sic)
      • PSU:
      • Antec 650 Gold TruePower (Seasonic)
      • Case:
      • Aerocool DS 200 (silenced, 53.6 litres)
      • Operating System:
      • Windows 10-64
      • Monitor(s):
      • 2 x ViewSonic 27" 1440p

    Re: Apple A14X benchmarks leak ahead of Apple Silicon Mac event

    Quote Originally Posted by John_Amstrad View Post
    Biggest mistake ever for Apple: rewrite and debug all software for a mediocre ARM. OMG.
    Surely, when they went from PPC to x86, the same comment could easily have been made?
    They went from a modern ISA with fairly little baggage to an ancient ISA still able to run the real horrors of early PCs: 8086 real mode, 64KB memory segments, a dire lack of registers, etc.
    Not that an ISA is as important as it used to be, given current transistor budgets.
    As for high-performance ARM: well, there's a lot more to it than the core, and some of the things high performance needs in terms of IO, memory bandwidth and so on will take up a fair bit of power, but I think scaling up is far easier than scaling down (just ask Intel: $billions wasted on Atom once they saw a threat, and nothing much to show for it).
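
    The real-mode horror mentioned above is easy to show concretely: a 16-bit segment and a 16-bit offset combine as segment * 16 + offset into a 20-bit physical address, so each 64KB window overlaps its neighbours and the same byte has many segment:offset names. A minimal sketch in C:

        #include <stdio.h>
        #include <stdint.h>

        /* 8086 real-mode translation: 20-bit physical address from a
         * 16-bit segment and a 16-bit offset. */
        static uint32_t real_mode_addr(uint16_t seg, uint16_t off)
        {
            return ((uint32_t)seg << 4) + off;   /* segment * 16 + offset */
        }

        int main(void)
        {
            /* Two different segment:offset pairs naming the same byte. */
            printf("1234:0005 -> 0x%05X\n", (unsigned)real_mode_addr(0x1234, 0x0005));
            printf("1200:0345 -> 0x%05X\n", (unsigned)real_mode_addr(0x1200, 0x0345));
            return 0;
        }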

  10. #26
    Registered+
    Join Date
    Nov 2020
    Posts
    25
    Thanks
    0
    Thanked
    1 time in 1 post

    Re: Apple A14X benchmarks leak ahead of Apple Silicon Mac event

    Quote Originally Posted by kompukare View Post
    Surely, when they went from PPC to x86, the same comment could easily have been made?
    They went from a modern ISA with fairly little baggage to an ancient ISA still able to run the real horrors of early PCs: 8086 real mode, 64KB memory segments, a dire lack of registers, etc.
    Not that an ISA is as important as it used to be, given current transistor budgets.
    As for high-performance ARM: well, there's a lot more to it than the core, and some of the things high performance needs in terms of IO, memory bandwidth and so on will take up a fair bit of power, but I think scaling up is far easier than scaling down (just ask Intel: $billions wasted on Atom once they saw a threat, and nothing much to show for it).
    Atom was relatively competitive but just had no adopters. Then Intel just gave up.

  11. #27
    Senior Member
    Join Date
    Jul 2009
    Location
    West Sussex
    Posts
    1,721
    Thanks
    197
    Thanked
    243 times in 223 posts
    • kompukare's system
      • Motherboard:
      • Asus P8Z77-V LX
      • CPU:
      • Intel i5-3570K
      • Memory:
      • 4 x 8GB DDR3
      • Storage:
      • Samsung 850 EVO 500GB | Corsair MP510 960GB | 2 x WD 4TB spinners
      • Graphics card(s):
      • Sappihre R7 260X 1GB (sic)
      • PSU:
      • Antec 650 Gold TruePower (Seasonic)
      • Case:
      • Aerocool DS 200 (silenced, 53.6 litres)
      • Operating System:
      • Windows 10-64
      • Monitor(s):
      • 2 x ViewSonic 27" 1440p

    Re: Apple A14X benchmarks leak ahead of Apple Silicon Mac event

    Quote Originally Posted by protagonist View Post
    Atom was relatively competitive but just had no adopters. Then Intel just gave up.
    Yes, but market segmentation can be very costly.
    Atom was only ever allowed to be relatively competitive as long as it didn't compete with the Core cash cow, in the same way Core is not supposed to compete with even higher-margin Xeon.
    That is, the ARM business is high volume, lower margin. In the end, while Intel could always have had trouble with their fabs and node progression, the high-volume, lower-margin business they said no to has meant that TSMC had enough money to invest in the future.
    No guarantees of course, but Intel's misstep with fabs and process would not have mattered that much if TSMC wasn't ultra-competitive.

  12. #28
    root Member DanceswithUnix's Avatar
    Join Date
    Jan 2006
    Location
    In the middle of a core dump
    Posts
    12,986
    Thanks
    781
    Thanked
    1,588 times in 1,343 posts
    • DanceswithUnix's system
      • Motherboard:
      • Asus X470-PRO
      • CPU:
      • 5900X
      • Memory:
      • 32GB 3200MHz ECC
      • Storage:
      • 2TB Linux, 2TB Games (Win 10)
      • Graphics card(s):
      • Asus Strix RX Vega 56
      • PSU:
      • 650W Corsair TX
      • Case:
      • Antec 300
      • Operating System:
      • Fedora 39 + Win 10 Pro 64 (yuk)
      • Monitor(s):
      • Benq XL2730Z 1440p + Iiyama 27" 1440p
      • Internet:
      • Zen 900Mb/900Mb (CityFibre FttP)

    Re: Apple A14X benchmarks leak ahead of Apple Silicon Mac event

    Quote Originally Posted by protagonist View Post
    Atom was relatively competitive but just had no adopters. Then Intel just gave up.
    Atom was not close to competitive in features or power consumption. But on price, well, as long as they were giving the things away they were in loads of products. Then Intel started charging money for them, about the time those of us with Atom devices (I have a couple of tablets here) were getting sick of the lousy battery life and the constant thermal throttling from the high temperatures.
