Thread: Gaming performance of Apple M1 Pro, M1 Max investigated

  1. #1
    HEXUS.admin
    Join Date
    Apr 2005
    Posts
    31,709
    Thanks
    0
    Thanked
    2,073 times in 719 posts

    Gaming performance of Apple M1 Pro, M1 Max investigated

    After Apple's blockbuster reveal last week, the GPUs don't live up to the hype.
    Read more.

  2. #2
    Moosing about! CAT-THE-FIFTH's Avatar
    Join Date
    Aug 2006
    Location
    Not here
    Posts
    32,042
    Thanks
    3,909
    Thanked
    5,213 times in 4,005 posts
    • CAT-THE-FIFTH's system
      • Motherboard:
      • Less E-PEEN
      • CPU:
      • Massive E-PEEN
      • Memory:
      • RGB E-PEEN
      • Storage:
      • Not in any order
      • Graphics card(s):
      • EVEN BIGGER E-PEEN
      • PSU:
      • OVERSIZED
      • Case:
      • UNDERSIZED
      • Operating System:
      • DOS 6.22
      • Monitor(s):
      • NOT USUALLY ON....WHEN I POST
      • Internet:
      • FUNCTIONAL

    Re: Gaming performance of Apple M1 Pro, M1 Max investigated

    Apple and hype - NEVER!!

  4. #3
    Senior Member
    Join Date
    May 2014
    Posts
    2,385
    Thanks
    181
    Thanked
    304 times in 221 posts

    Re: Gaming performance of Apple M1 Pro, M1 Max investigated

    I mean, pushing 85 FPS on quite a demanding title at the highest settings with TAA, on effectively an APU, is no mean feat. I'd say that's actually quite a success considering every other configuration on that list is a dGPU. Obviously there needs to be a substantial amount of optimisation done, but colour me impressed that they have an iGPU that competes with dGPUs quite successfully, considering nearly no one has looked at optimising for macOS gaming in a very long time, let alone for the M1 processor's GPU architecture.
    Last edited by Tabbykatze; 26-10-2021 at 10:25 AM.

  5. #4
    Senior Member
    Join Date
    Jul 2003
    Posts
    12,185
    Thanks
    911
    Thanked
    599 times in 420 posts

    Re: Gaming performance of Apple M1 Pro, M1 Max investigated

    Pretty sure I said that the performance increase they were shouting about would be in a test done with a specially optimised version of something...

  6. #5
    Moosing about! CAT-THE-FIFTH's Avatar
    Join Date
    Aug 2006
    Location
    Not here
    Posts
    32,042
    Thanks
    3,909
    Thanked
    5,213 times in 4,005 posts
    • CAT-THE-FIFTH's system
      • Motherboard:
      • Less E-PEEN
      • CPU:
      • Massive E-PEEN
      • Memory:
      • RGB E-PEEN
      • Storage:
      • Not in any order
      • Graphics card(s):
      • EVEN BIGGER E-PEEN
      • PSU:
      • OVERSIZED
      • Case:
      • UNDERSIZED
      • Operating System:
      • DOS 6.22
      • Monitor(s):
      • NOT USUALLY ON....WHEN I POST
      • Internet:
      • FUNCTIONAL

    Re: Gaming performance of Apple M1 Pro, M1 Max investigated

    Quote Originally Posted by Tabbykatze View Post
    I mean, pushing 85 FPS on quite a demanding title at the highest settings with TAA, on effectively an APU, is no mean feat. I'd say that's actually quite a success considering every other configuration on that list is a dGPU. Obviously there needs to be a substantial amount of optimisation done, but colour me impressed that they have an iGPU that competes with dGPUs quite successfully, considering nearly no one has looked at optimising for macOS gaming in a very long time, let alone for the M1 processor's GPU architecture.
    The M1 Max is around 425mm² on cutting-edge second-generation TSMC 5nm (N5P), and has double the transistors of the GA102 used in an RTX 3090. It also has something like 400GB/s of memory bandwidth using very expensive soldered LPDDR5. It's significantly bigger than even the Xbox Series X SoC, which is well under 400mm² on TSMC 7nm and is close to an RTX 3060 Ti/RX 6700 XT in performance. All the Nvidia mobile dGPUs are on Samsung 8nm, which is not only less dense than TSMC 7nm but also probably isn't as efficient.

    It gets even worse when it can barely match a mobile RTX 3060 (looking at other tests) in the best-case scenario. Now consider how an RX 6600 XT competes with the desktop RTX 3060, and how a mobile RX 6600-based dGPU is probably going to beat a mobile RTX 3060. So a 235mm² Navi 23 dGPU with only 224GB/s of bandwidth over a 128-bit GDDR6 memory bus, plus an 8-core Zen 3 SoC (which also has an iGPU), is probably going to be as good as what Apple is producing, on a process node at least one generation behind, using less die area and fewer transistors too (and cheaper to make), with less memory bandwidth using cheaper memory.

    It's going to be like the Intel dGPUs, which are using a sledgehammer approach on better nodes to try and defeat the opposition. But it's quite clear AMD and even Nvidia are doing more with fewer transistors.
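    (For anyone who wants to sanity-check the bandwidth figures above, here's a quick back-of-the-envelope calculation. The bus widths and transfer rates are my assumptions based on commonly reported specs, not numbers from the article.)

    Code:
    # Back-of-the-envelope check of the bandwidth figures mentioned above.
    # Assumed (not from the article): M1 Max = 512-bit LPDDR5 at 6400 MT/s;
    # mobile Navi 23 = 128-bit GDDR6 at 14 Gbps.

    def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_gtps: float) -> float:
        """Peak bandwidth in GB/s = bus width in bytes * transfers per second."""
        return (bus_width_bits / 8) * transfer_rate_gtps

    m1_max = peak_bandwidth_gbs(512, 6.4)    # ~409.6 GB/s, in line with the ~400GB/s figure
    navi23m = peak_bandwidth_gbs(128, 14.0)  # 224 GB/s, matching the number above

    print(f"M1 Max (assumed 512-bit LPDDR5 @ 6400 MT/s): {m1_max:.1f} GB/s")
    print(f"Mobile Navi 23 (assumed 128-bit GDDR6 @ 14 Gbps): {navi23m:.1f} GB/s")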

    Edit!!

    Another issue is the API Apple uses for its GPUs, Metal, which is very compute-focused. Instead of replacing Metal with Vulkan, they stuck with it. That is probably the biggest issue: if they had at least adopted Vulkan, it would be far easier to code games to use the GPU in the M1. So you have the big issue of devs having to code not only with DX12, Vulkan and whatever Sony uses in mind, but also Metal. Metal is worse than Vulkan for gaming workloads, and Apple won't support the latter.
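    (To illustrate the point about the extra API burden: a toy sketch of the kind of backend split a cross-platform renderer ends up maintaining. All class and method names below are made up for illustration; nothing here calls a real graphics API.)

    Code:
    # Toy sketch of the per-API backend burden described above: every extra
    # graphics API a platform insists on is another backend to write and maintain.
    # All names here are hypothetical; nothing calls a real graphics API.
    from abc import ABC, abstractmethod

    class RenderBackend(ABC):
        @abstractmethod
        def create_pipeline(self, shader_source: str) -> str: ...

        @abstractmethod
        def draw(self, pipeline: str, vertex_count: int) -> None: ...

    class VulkanBackend(RenderBackend):
        def create_pipeline(self, shader_source: str) -> str:
            return f"vk-pipeline({len(shader_source)} bytes of SPIR-V)"

        def draw(self, pipeline: str, vertex_count: int) -> None:
            print(f"[Vulkan] drawing {vertex_count} vertices with {pipeline}")

    class MetalBackend(RenderBackend):
        def create_pipeline(self, shader_source: str) -> str:
            # Shaders also have to be ported to MSL rather than reusing SPIR-V/HLSL.
            return f"mtl-pipeline({len(shader_source)} bytes of MSL)"

        def draw(self, pipeline: str, vertex_count: int) -> None:
            print(f"[Metal] drawing {vertex_count} vertices with {pipeline}")

    def pick_backend(platform: str) -> RenderBackend:
        # macOS only exposes Metal natively, so it always needs the extra branch.
        return MetalBackend() if platform == "darwin" else VulkanBackend()

    backend = pick_backend("darwin")
    pipeline = backend.create_pipeline("pretend shader source")
    backend.draw(pipeline, vertex_count=3)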
    Last edited by CAT-THE-FIFTH; 26-10-2021 at 10:53 AM.

  7. #6
    Senior Member
    Join Date
    May 2008
    Location
    London town
    Posts
    427
    Thanks
    8
    Thanked
    21 times in 16 posts

    Re: Gaming performance of Apple M1 Pro, M1 Max investigated

    As ever they kill the benchmarks, but all a bit meh IRL. Nice machines, true, but...

  8. #7
    Registered+
    Join Date
    Oct 2018
    Posts
    63
    Thanks
    0
    Thanked
    2 times in 2 posts

    Re: Gaming performance of Apple M1 Pro, M1 Max investigated

    Given the 6800M above is stated at double the performance of the 5600M, and all the other caveats that make this an apples-to-oranges comparison, this article is, to me, null and void.

  9. #8
    Senior Member
    Join Date
    May 2014
    Posts
    2,385
    Thanks
    181
    Thanked
    304 times in 221 posts

    Re: Gaming performance of Apple M1 Pro, M1 Max investigated

    Quote Originally Posted by CAT-THE-FIFTH View Post
    The M1 Max is around 425mm² on cutting-edge second-generation TSMC 5nm (N5P), and has double the transistors of the GA102 used in an RTX 3090. It also has something like 400GB/s of memory bandwidth using very expensive soldered LPDDR5. It's significantly bigger than even the Xbox Series X SoC, which is well under 400mm² on TSMC 7nm and is probably close to an RTX 3060 Ti/RX 6700 in performance. All the Nvidia mobile dGPUs are on Samsung 8nm, which is not only less dense than TSMC 7nm but also probably isn't as efficient.

    It gets even worse when it can barely match a mobile RTX 3060 (looking at other tests) in the best-case scenario. Now consider how an RX 6600 competes with the desktop model, and how a mobile RX 6600-based dGPU is probably going to beat a mobile RTX 3060. So a 235mm² Navi 23 dGPU with only 224GB/s over a 128-bit GDDR6 memory bus, plus an 8-core Zen 3 SoC (which also has an iGPU), is probably going to be as good as what Apple is producing, on a process node at least one generation behind, using less die area too (and cheaper to make), with less memory bandwidth using cheaper memory.

    It's going to be like the Intel dGPUs, which are using a sledgehammer approach on better nodes to try and defeat the opposition.
    Except this is a fully fledged SoC with x86 emulation, accelerators for non-immediate CPU use (like AVX on Intel, etc.), and it has had nearly naff-all development into getting a proper software-to-hardware layer in place for game developers. I highly doubt Apple has optimised their drivers for something like gaming, or at best they've done the bare minimum necessary. How much of that 425mm² is GPU die area? The GA102 is not a fully fledged SoC at 628mm².

    With my rudimentary mathematics and cutting an image up in Paint, the GPU looks to be occupying about 24% of the total die space, giving a total GPU size of around 103mm². I mean, discounting the LPDDR5, NPU engine, E+P core space and all the other stuff along with it, that's a pretty small GPU throwing out 85 FPS.

    I think you're being unduly unfair by not looking at it as a whole piece and a sum of its parts, rather than just treating it as an apples-to-apples comparison.

    Edit: As you pointed out, they're using a sub-optimal API layer (Metal) instead of Vulkan for gaming; who knows how far their gaming performance could have gone with a far more effective API layer that developers actually code for.

    Second edit: Also, considering the roughly 1.8x transistor density increase from 7nm to 5nm, the GPU cores alone would be around 186mm² on 7nm. So while using a less efficient API, and discounting memory silicon, the 6600 XT gets 111 FPS with 1% lows down to 85 FPS. I mean, those numbers are still very respectable...
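    (A quick sketch of the arithmetic in those two edits. The 24% GPU share and the 1.8x density factor are this post's eyeballed estimates, not measured values.)

    Code:
    # Rough check of the die-area estimates in the edits above.
    # Inputs are this thread's eyeballed estimates, not official figures.
    M1_MAX_DIE_MM2 = 425       # approximate total die size quoted earlier
    GPU_SHARE = 0.24           # estimated fraction of the die occupied by the GPU
    N7_TO_N5_DENSITY = 1.8     # assumed transistor-density scaling factor

    gpu_area_n5 = M1_MAX_DIE_MM2 * GPU_SHARE            # ~102 mm^2 on N5
    gpu_area_n7_equiv = gpu_area_n5 * N7_TO_N5_DENSITY  # ~184 mm^2 "7nm-equivalent"

    print(f"Estimated GPU area on N5:        {gpu_area_n5:.0f} mm^2")
    print(f"Scaled to a 7nm-equivalent area: {gpu_area_n7_equiv:.0f} mm^2")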
    Last edited by Tabbykatze; 26-10-2021 at 11:14 AM.

  10. #9
    Moosing about! CAT-THE-FIFTH's Avatar
    Join Date
    Aug 2006
    Location
    Not here
    Posts
    32,042
    Thanks
    3,909
    Thanked
    5,213 times in 4,005 posts
    • CAT-THE-FIFTH's system
      • Motherboard:
      • Less E-PEEN
      • CPU:
      • Massive E-PEEN
      • Memory:
      • RGB E-PEEN
      • Storage:
      • Not in any order
      • Graphics card(s):
      • EVEN BIGGER E-PEEN
      • PSU:
      • OVERSIZED
      • Case:
      • UNDERSIZED
      • Operating System:
      • DOS 6.22
      • Monitor(s):
      • NOT USUALLY ON....WHEN I POST
      • Internet:
      • FUNCTIONAL

    Re: Gaming performance of Apple M1 Pro, M1 Max investigated

    Quote Originally Posted by Tabbykatze View Post
    Except this is a fully fledged SoC with x86 emulation, accelerators for non-immediate CPU use (like AVX on Intel, etc.), and it has had nearly naff-all development into getting a proper software-to-hardware layer in place for game developers. I highly doubt Apple has optimised their drivers for something like gaming, or at best they've done the bare minimum necessary. How much of that 425mm² is GPU die area? The GA102 is not a fully fledged SoC at 628mm².

    With my rudimentary mathematics and cutting an image up in Paint, the GPU looks to be occupying about 24% of the total die space, giving a total GPU size of around 103mm². I mean, discounting the LPDDR5, NPU engine, E+P core space and all the other stuff along with it, that's a pretty small GPU throwing out 85 FPS.

    I think you're being unduly unfair by not looking at it as a whole piece and a sum of its parts, rather than just treating it as an apples-to-apples comparison.

    Edit: As you pointed out, they're using a sub-optimal API layer (Metal) instead of Vulkan for gaming; who knows how far their gaming performance could have gone with a far more effective API layer that developers actually code for.
    I am looking at it based on the process nodes and transistors used. AMD, etc. are doing more with less; Apple is throwing transistors at the problem, and it's no different from what Intel Arc is doing. The big issue here is what happens if TSMC has a slip-up and Apple is stuck on a node for 2 or 3 generations? AMD and Nvidia have had to engineer ways around this.

    Apple is incredibly reliant on TSMC delivering.

    A fair amount of a dGPU is not shaders either. Look at Navi 22 as an example.



    The Apple block diagrams just circle the physical shaders; on Navi 22 only about 30% of the die is actual shaders. Now consider that TSMC 7nm is much less dense than second-generation TSMC 5nm (N5P): the Apple SoC, at 425mm², has far more transistors than the 335mm² 7nm Navi 22 die!! Those RX 6800M laptops use Navi 22.

    Companies like AMD have to maintain decent backward compatibility on both dGPU and CPU, so they have to spend transistors on that too, and emulation (and virtualisation) still needs to be done on AMD/Intel cores as well.

    So all I am seeing is:
    1.) Apple needing to be on the best node possible, which costs a lot
    2.) Apple having to use a massive SoC on a newish node, which means high costs and poor yields
    3.) Apple having to use the latest low-power (and expensive) LPDDR5 with a ton of channels to compete
    4.) Apple needing to package the RAM close to the die (just like the AMD HBM solution), which probably increases complexity
    5.) Having to use fixed-function hardware and GPU compute to beat the competitors, hence having to spend a lot on specific software optimisations to do that
    6.) Also meaning that if the software is not supported, or you use legacy applications, performance will be patchy

    So it's great if you run within the range of software Apple is interested in, which has always been the case with Macs going back decades. Even the G3, G4 and G5 Macs could do quite well in the areas they were built for. I know because I have had to work in plenty of mixed Mac/PC environments before. Even the new M1 MacBooks look like nice products overall.

    But the issue is that Apple needs to move to chiplets at some point.

    AMD/Intel have progressed further along the chiplet/heterogeneous-node manufacturing route (especially as they are far more experienced in packaging), because the reality is that relying on new nodes (and chucking tons of transistors at the problem) is going to become harder and harder as the shrinks get harder too.

    It's why GPU chiplets are going to be a thing soon, and even why AMD went that way with their CPUs. It's about decreasing chip size, and making the bits which are less node-dependent on cheaper and more plentiful nodes. But you also need to invest in a decent I/O fabric. A Ryzen 9 5950X, for example, is made of two 80mm² 7nm chiplets and a 125mm² I/O die on an ancient 12nm/14nm node, using cheap DDR4. AMD has also proven, by using 3D packaging, that they can get notable performance and efficiency improvements via simply stacking more chiplets on top (which is cheaper than making even bigger chiplets).
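    (To put some rough numbers on why die size matters so much here, a toy Poisson yield model. The defect density is an assumed illustrative value, not a published TSMC figure.)

    Code:
    # Toy Poisson yield model to put rough numbers on the die-size point above:
    # yield ~= exp(-defect_density * die_area). The defect density is an assumed,
    # illustrative value, not a published TSMC figure.
    from math import exp

    DEFECTS_PER_MM2 = 0.1 / 100   # assume 0.1 defects per cm^2

    def die_yield(area_mm2: float) -> float:
        """Expected fraction of dies with zero defects."""
        return exp(-DEFECTS_PER_MM2 * area_mm2)

    for label, area in [("425mm^2 monolithic SoC", 425),
                        ("80mm^2 Zen 3 chiplet (CCD)", 80),
                        ("125mm^2 I/O die", 125)]:
        print(f"{label}: ~{die_yield(area):.0%} defect-free")

    # Chiplets are tested before packaging, so a defect scraps a small, cheap die
    # rather than a whole 400mm^2-class part built on the leading node.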

    It was why Intel Lakefield was more notable for how it was made than for the final product, and stuff like delinking production from process nodes is increasingly going to be important. So is putting R&D into low-power connectivity (the I/O fabric) between the various parts. AMD and companies like Fujitsu have put a lot of effort into power reduction in the latter area. We have to see how Intel and Nvidia do this as they move to chiplets too.

    If you look at both Zen 3 and RDNA2, AMD managed to get decent gains on the same node (yes, I don't like the prices, but I can't deny I am impressed by how AMD managed to get big efficiency and per-transistor performance improvements from the same node).

    Maybe we need to agree to disagree!
    Last edited by CAT-THE-FIFTH; 26-10-2021 at 11:55 AM.

  11. #10
    Senior Member
    Join Date
    Mar 2019
    Location
    Northants
    Posts
    309
    Thanks
    4
    Thanked
    22 times in 19 posts
    • KultiVator's system
      • Motherboard:
      • Aorus x570 Ultra
      • CPU:
      • Ryzen 3900x
      • Memory:
      • G.Skill 32GB (2x16gb) 3600Mhz
      • Storage:
      • 5TB of NVMe storage (most of it on PCIe4) + Various SATA SSDs & HDDs
      • Graphics card(s):
      • Aorus RTX 2080 Super OC 8GB
      • PSU:
      • Corsair RM750x
      • Case:
      • Phanteks Eclipse P600s (Black & White Edition)
      • Operating System:
      • Windows 10 Pro
      • Monitor(s):
      • AOC 32" 4K IPS / ASUS 24" ProArt 1200p IPS / GStory 1080p/166Hz GSync/FreeSync IPS / Quest 2 HMD

    Re: Gaming performance of Apple M1 Pro, M1 Max investigated

    Given that the M1 architecture is hamstrung by Rosetta 2 for most of the games we're seeing benchmarks for, I'm finding those framerates pretty impressive.

    I'm sure we'll see a handful of native Mac games appear that really exploit the M1/Pro/Max APUs properly - removing the need for all of the compromises that come from the double-whammy of converting from x86 to ARM on the fly and from DX12 / OpenGL / Vulkan to Metal (and probably some translation of audio and controller-side stuff too).

    However, 'AAA' gaming is unlikely to become a thing on the Mac for the foreseeable future - too damn expensive for a games machine for the masses, and too small a user base to justify multi-year game development projects.

    I'd wager they could see some significant growth in other areas though - from forward-looking iPad / iOS developers moving some of their game titles over to harness the much beefier internals of M1-based Macs. The use of Metal in this context makes more sense, offering an easy and natural progression for the seeming legions of iOS/iPadOS games developers. But there will likely remain a huge gap in quality and scope between titles like the 'Asphalt' series on M1 compared to 'Forza Horizon' on PC/Xbox.

  12. #11
    Senior Member
    Join Date
    Jun 2013
    Location
    ATLANTIS
    Posts
    1,207
    Thanks
    1
    Thanked
    28 times in 26 posts

    Re: Gaming performance of Apple M1 Pro, M1 Max investigated

    Mac vs PC is like the debate between German luxury sedans and Japanese luxury sedans, where the latter is cheaper but EXTREMELY reliable in everything.

  13. #12
    Senior Member
    Join Date
    May 2014
    Posts
    2,385
    Thanks
    181
    Thanked
    304 times in 221 posts

    Re: Gaming performance of Apple M1 Pro, M1 Max investigated

    Quote Originally Posted by CAT-THE-FIFTH View Post
    I am looking at it based on the process nodes and transistors used. AMD, etc. are doing more with less; Apple is throwing transistors at the problem, and it's no different from what Intel Arc is doing. The big issue here is what happens if TSMC has a slip-up and Apple is stuck on a node for 2 or 3 generations? AMD and Nvidia have had to engineer ways around this.
    Until that slip-up happens, we're just enjoying the fact that TSMC keeps nailing its dates to the wall and, comparatively speaking, hitting them every time.

    Quote Originally Posted by CAT-THE-FIFTH View Post
    Apple is incredibly reliant on TSMC delivering.
    Yes, and as above, until evidence to the contrary, it'll keep going.

    Quote Originally Posted by CAT-THE-FIFTH View Post
    A fair amount of a dGPU is not shaders either. Look at Navi 22 as an example.

    The Apple block diagrams just circle the physical shaders; on Navi 22 only about 30% of the die is actual shaders. Now consider that TSMC 7nm is much less dense than second-generation TSMC 5nm (N5P): the Apple SoC, at 425mm², has far more transistors than the 335mm² 7nm Navi 22 die!! Those RX 6800M laptops use Navi 22.

    Companies like AMD have to maintain decent backward compatibility on both dGPU and CPU, so they have to spend transistors on that too, and emulation (and virtualisation) still needs to be done on AMD/Intel cores as well.
    So what I'm seeing here is that Apple's first serious attempt at a desktop/workstation-laptop-grade GPU inside a processor falls short for you because they're not yet efficient in transistors per unit of performance? I doubt we'll ever get it, but I'd love a proper breakdown of how much of the M1 Max is actually for what, because there is a substantial amount of die space still unlabelled; obviously the majority is the 32-core GPU and its SLC blocks, which likely act as an L2 cache for the GPU. They have a very inefficient yet powerful GPU in there, but as you've noted that's likely because of the terrible API layer hamstringing potentially a lot of performance, and the Rosetta 2 translation as well.

    Quote Originally Posted by CAT-THE-FIFTH View Post
    So all I am seeing is:
    1.) Apple needing to be on the best node possible, which costs a lot
    2.) Apple having to use a massive SoC on a newish node, which means high costs and poor yields
    3.) Apple having to use the latest low-power (and expensive) LPDDR5 with a ton of channels to compete
    4.) Apple needing to package the RAM close to the die (just like the AMD HBM solution), which probably increases complexity
    5.) Having to use fixed-function hardware and GPU compute to beat the competitors, hence having to spend a lot on specific software optimisations to do that
    6.) Also meaning that if the software is not supported, or you use legacy applications, performance will be patchy
    1) Being first is Best, didn't you know?
    2) TSMC state they have a better defect density for N5 than for N7, so it's likely they're getting pretty good chip yields, meaning the high cost is just the usual Apple tax.
    3) To compete, or just to move with the times; both AMD and Intel are going DDR5 in upcoming systems. DDR5 has also been heralded as potentially being great for iGPUs because of the massive bandwidth, so it seems like a solid move tbh.
    4) Doubt it; it's not hard to run a trace on the substrate, most just don't do it because it limits the OEMs from making changes in design. Apple are their own designer and manufacturer, which means they can lock in zero upgradability even further by making the RAM part of the processor's design rather than soldering it onto a mobo. By making it part of the chip design, it's harder to argue that they don't need soldered-on RAM, meaning they don't have to provide any form of upgradability.
    5) What specific fixed-function hardware do you mean? If you mean the x86 emulation accelerator, that's been an absolute boon to adoption of the M1, and was probably the best idea they could have come up with; it smashed Microsoft in the face versus their own Arm offering from QualPoo. It's a closed ecosystem becoming more closed; of course they're spending a lot on their software optimisations. I don't see your point here at all.
    6) This is Apple we're talking about; they literally left their PowerPC users out in the rain to die of dysentery. This is just who they are. I'm amazed they even did the x86 emulation accelerator, but it was a good decision as it will help a lot of users move over with little issue.

    Quote Originally Posted by CAT-THE-FIFTH View Post
    So it's great if you run within the range of software Apple is interested in, which has always been the case with Macs going back decades. Even the G3, G4 and G5 Macs could do quite well in the areas they were built for. I know because I have had to work in plenty of mixed Mac/PC environments before. Even the new M1 MacBooks look like nice products overall.
    Thank you for providing your credentials again, we're on the same page of the book.

    Quote Originally Posted by CAT-THE-FIFTH View Post
    But the issue is that Apple needs to move to chiplets at some point.

    AMD/Intel have progressed further along the chiplet/heterogeneous-node manufacturing route (especially as they are far more experienced in packaging), because the reality is that relying on new nodes (and chucking tons of transistors at the problem) is going to become harder and harder as the shrinks get harder too.
    Whatever works for them; they're not exactly focused on making their products as cheap as possible to beat out the competition. They're still the boutique PC maker, and I'm willing to bet money that they're just soaking up monolithic die costs because they can, not because they feel they should. They'll probably move to chiplets at some point as well, and "it'll be the best ever", but I don't see much reason for them to do it now unless they start breaching 500mm² regularly.

    Quote Originally Posted by CAT-THE-FIFTH View Post
    It's why GPU chiplets are going to be a thing soon, and even why AMD went that way with their CPUs. It's about decreasing chip size, and making the bits which are less node-dependent on cheaper and more plentiful nodes. But you also need to invest in a decent I/O fabric. A Ryzen 9 5950X, for example, is made of two 80mm² 7nm chiplets and a 125mm² I/O die on an ancient 12nm/14nm node, using cheap DDR4. AMD has also proven, by using 3D packaging, that they can get notable performance and efficiency improvements via simply stacking more chiplets on top (which is cheaper than making even bigger chiplets).
    See my comment above. In addition, Apple has managed to make decent efficiency improvements by being on the latest node and through their accelerators and design, so I guess they'll get there when they get there.

    Quote Originally Posted by CAT-THE-FIFTH View Post
    It was why Intel Lakefield was more notable for how it was made than for the final product, and stuff like delinking production from process nodes is increasingly going to be important. So is putting R&D into low-power connectivity (the I/O fabric) between the various parts. AMD and companies like Fujitsu have put a lot of effort into power reduction in the latter area. We have to see how Intel and Nvidia do this as they move to chiplets too.

    If you look at both Zen 3 and RDNA2, AMD managed to get decent gains on the same node (yes, I don't like the prices, but I can't deny I am impressed by how AMD managed to get big efficiency and per-transistor performance improvements from the same node).
    Intel Lakefield was heterogeneous only in the sense that it used a different node for its interposer; otherwise it was a single node for all the important bits. Lakefield was most notable for how utter garbage it was, likely because of Windows, but it's in the same category as Cannon Lake, which Intel obviously wants everyone to forget about. We have no hard data on TDPs, but they've fit something that competes with up to a 6600M/3060M into a "50W" envelope. Don't know about you, but that is quite an efficiency and per-transistor power/performance jump.

    Quote Originally Posted by CAT-THE-FIFTH View Post
    Maybe we need to agree to disagree!
    As long as we can keep this civil, we don't.

  14. #13
    Senior Member
    Join Date
    Apr 2016
    Posts
    772
    Thanks
    0
    Thanked
    9 times in 9 posts

    Re: Gaming performance of Apple M1 Pro, M1 Max investigated

    Well, it does not seem to perform well overall, but for the work tasks it is designed for it is brilliant; it just seems like a small niche that would need that.

    I think they are going to have tough competition cost-wise, because of bad yields.

  15. #14
    Senior Member
    Join Date
    Mar 2014
    Posts
    260
    Thanks
    0
    Thanked
    7 times in 6 posts

    Re: Gaming performance of Apple M1 Pro, M1 Max investigated

    I wonder what the performance/watt is compared to the other systems.

  16. #15
    Missed by us all - RIP old boy spacein_vader's Avatar
    Join Date
    Sep 2014
    Location
    Darkest Northamptonshire
    Posts
    2,015
    Thanks
    184
    Thanked
    1,086 times in 410 posts
    • spacein_vader's system
      • Motherboard:
      • MSI B450 Tomahawk Max
      • CPU:
      • Ryzen 5 3600
      • Memory:
      • 2x8GB Patriot Steel DDR4 3600mhz
      • Storage:
      • 1tb Sabrent Rocket NVMe (boot), 500GB Crucial MX100, 1TB Crucial MX200
      • Graphics card(s):
      • Gigabyte Radeon RX5700 Gaming OC
      • PSU:
      • Corsair HX 520W modular
      • Case:
      • Fractal Design Meshify C
      • Operating System:
      • Windows 10 Pro
      • Monitor(s):
      • BenQ GW2765, Dell Ultrasharp U2412
      • Internet:
      • Zen Internet

    Re: Gaming performance of Apple M1 Pro, M1 Max investigated

    Aren't running gaming benchmarks on Apple products like 0-60 times for lawnmowers? Sure they may be impressive but very few people will buy one for that purpose and it has little bearing on what they will be used for.

  17. #16
    Long member
    Join Date
    Apr 2008
    Posts
    2,427
    Thanks
    70
    Thanked
    404 times in 291 posts
    • philehidiot's system
      • Motherboard:
      • Father's bored
      • CPU:
      • Cockroach brain V0.1
      • Memory:
      • Innebriated, unwritten
      • Storage:
      • Big Yellow Self Storage
      • Graphics card(s):
      • Semi chewed Crayola Mega Pack
      • PSU:
      • 20KW single phase direct grid supply
      • Case:
      • Closed, Open, Cold
      • Operating System:
      • Cockroach
      • Monitor(s):
      • The mental health nurses
      • Internet:
      • Please.

    Re: Gaming performance of Apple M1 Pro, M1 Max investigated

    Quote Originally Posted by spacein_vader View Post
    Aren't running gaming benchmarks on Apple products like 0-60 times for lawnmowers? Sure they may be impressive but very few people will buy one for that purpose and it has little bearing on what they will be used for.
    Good point, but I'd argue if they are advertising graphics performance, they're going to be judged on it.

    Apple have a problem with honesty and integrity. I've been shafted by them and won't buy again. Nice machines and so on, but there's so much bad design (dodgy keyboards, melting glue on "unibody" chassis, some obsession with sticking high-voltage backlight lines next to data lines in areas with no fluid ingress protection!) and anti-consumer, anti-repair behaviour that I just can't justify another. The price is now waaay up there and the thing is... they can charge it and get away with it.

    I'm really interested to see where these M1 chips go, and I think it's great to see another company taking a different approach that provides a genuine alternative. I won't be buying one, but I'm following them with great interest.

    We have to bear in mind these are new and there will be performance inconsistencies and niggles to iron out. What is the killer for me is Apple's bull marketing, which infuriates me. They're telling people they've invented a new type of memory where they have 16GB of RAM but can expand it using the SSD to create super-speed new memory... The "we've invented" part isn't the worst part. It's that apparently it's using these paging files routinely, hammering the SSD and decreasing its lifespan. And it's soldered on, so there's no chance of a replacement. They may have resolved this "tiny" issue, but if you're a professional who handles big files and you're routinely using the SSD as virtual memory, you're going to reduce the lifespan significantly, and on the original M1 machines there was no option for more than 16GB of RAM, which is madness for a "professional"-grade machine.
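    (If you want to check whether a Mac really is leaning on the SSD as virtual memory, macOS exposes swap usage through sysctl. A minimal sketch, assuming it is run on macOS with the standard sysctl tool available:)

    Code:
    # Minimal sketch: read macOS swap usage to see how heavily a machine is paging.
    # Assumes it is run on macOS with the standard `sysctl` tool available.
    import subprocess

    def macos_swap_usage() -> str:
        result = subprocess.run(
            ["sysctl", "vm.swapusage"],  # reports total/used/free swap
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    if __name__ == "__main__":
        print(macos_swap_usage())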

    At least they've brought ports back.
