Page 10 of 13
Results 145 to 160 of 196

Thread: AMD discrete GPU market share eroded to less than 20 per cent

  1. #145
    Registered+
    Join Date
    Dec 2013
    Posts
    31
    Thanks
    0
    Thanked
    0 times in 0 posts
    • GrimMachine's system
      • Motherboard:
      • Asus Z87-Pro
      • CPU:
      • i5-4570K @ 4.2GHz OC
      • Memory:
      • 16GB G-Skill Ripjaws
      • Storage:
      • loads of SSDs
      • Graphics card(s):
      • 2 x MSI 970GTX SLI
      • PSU:
      • Corsair HX850i Silver
      • Case:
      • Fractal Design R5 Black
      • Operating System:
      • Windows 7 & 8.1 Dual boot
      • Monitor(s):
      • 3 x BenQ RL2455, 1 x BenQ XL2411t 144hz
      • Internet:
      • 72Mb

    Re: AMD discrete GPU market share eroded to less than 20 per cent

    Quote Originally Posted by crossy View Post
    You should have PM'd that bit - pointless and doesn't add to the discussion.

    Now here I'm going to agree. Heck, as has been said above Microsoft didn't exactly drop DX12 "professionally" and it's (by all accounts) still pretty darned ropey.


    Good post.

    You've pretty much nailed it - for all our loudness here, we're only a tiny part of the "buying public" and the overwhelming majority will just take what the manufacturers give them. And let's not kid ourselves: AMD have done pretty well with their console offerings and also the very low-end APU stuff. The problem is that NVidia have pretty much solid mindshare on the higher-end bundled offerings - take a trip to Dell, HP, Lenovo, etc., and you'll see that their high-end desktops and laptops (those with discrete GPUs) are pretty much exclusively an Intel+NVidia combo.

    Now, as someone who's currently running an AMD+AMD setup (CPU+GPU) I'd love to mislead myself into thinking "ooh, big conspiracy" - after all, it's not as if Intel doesn't have "form" in that area... Nope, AMD's problems are simply being "too late", and that Intel/NVidia are better at marketing than they are. Look at the high-end CPUs - Zen isn't due until next year, by which time Intel will undoubtedly match it. Likewise on GPUs, NVidia seems to have a "sausage machine" of new products, and I'm sorry to say that there's a group of (vocal!) folks out there who'll always pick the newest - serial upgraders, as it were.

    AMD products are pretty price-competitive (always have been), but that's no help if they're seen as inefficient, hard to live with (noisy), low-end, poorly supported (drivers and apps) or based on "last year's tech". As seems usual with big companies these days, the "bean counters" have utterly failed to appreciate that their R&D department is an "asset", not an "overhead". They've culled the "geeks" and now the cupboard is bare.

    We tend to agree on most points here.

    It wasn't always the case that PC manufacturers chose Nvidia over AMD; there always used to be a choice. Recently, though, a reference 290 or 290X would have been a bad choice for a system builder like Dell due to the heat issues, and with the new Fury X having a dedicated water-cooling loop, that would run into issues with manufacturers' standard builds, so I can see why the big manufacturers are choosing Nvidia - although I'm sure some money changing hands also plays its part.

    As far as high-end gaming laptops are concerned, AMD hasn't released anything new since the 7970M in 2012; the 8970M and R9-290M were just straight rebrands. Nvidia, meanwhile, released the 680M to compete with the 7970M, then the 780M with a higher shader count, then the short-lived 880M (a rebrand), and then the 970M and 980M, which are Maxwell. Rumours abound of a new mobile chip coming from Nvidia as well. So, as you can see, Nvidia have been releasing more products over the last three years than AMD - no surprise that they are winning in the high-end space.

  2. #146
    root Member DanceswithUnix's Avatar
    Join Date
    Jan 2006
    Location
    In the middle of a core dump
    Posts
    12,986
    Thanks
    781
    Thanked
    1,588 times in 1,343 posts
    • DanceswithUnix's system
      • Motherboard:
      • Asus X470-PRO
      • CPU:
      • 5900X
      • Memory:
      • 32GB 3200MHz ECC
      • Storage:
      • 2TB Linux, 2TB Games (Win 10)
      • Graphics card(s):
      • Asus Strix RX Vega 56
      • PSU:
      • 650W Corsair TX
      • Case:
      • Antec 300
      • Operating System:
      • Fedora 39 + Win 10 Pro 64 (yuk)
      • Monitor(s):
      • Benq XL2730Z 1440p + Iiyama 27" 1440p
      • Internet:
      • Zen 900Mb/900Mb (CityFibre FttP)

    Re: AMD discrete GPU market share eroded to less than 20 per cent

    Quote Originally Posted by Corky34 View Post
    To say AMD GPUs are inferior is a little misleading (imho), as how good a GPU is depends on the type of work it's best suited for and what you're asking it to do. It's generally accepted that presently GCN is better at handling parallel workloads and Maxwell better with serial workloads; if the parallel nature of DX12 had been introduced in DX11 we would have a very different situation. It's why (imo) AMD was forced to develop an API that would leverage the parallel nature of their GPUs.
    DX11 is what it is, and I think everyone had good visibility of that including AMD.

    I think the problem here is that AMD seem APU-obsessed, and what they are making is a compute engine. I mean, just look at how Fury and even the old 290 fly at OpenCL:

    http://www.phoronix.com/scan.php?pag...-r9-fury&num=7

    so wanting an API that can better make use of those resources isn't surprising.

    ( Don't click on earlier pages in that review unless you have a strong stomach - it's a Linux review of an AMD GPU, so generally it isn't pretty )

  3. #147
    Senior Member watercooled's Avatar
    Join Date
    Jan 2009
    Posts
    11,478
    Thanks
    1,541
    Thanked
    1,029 times in 872 posts

    Re: AMD discrete GPU market share eroded to less than 20 per cent

    Quote Originally Posted by Corky34 View Post
    No it wasn't; MSAA was disabled, not ALL forms of Anti-Aliasing.
    ...snip...
    I might be missing part of the discussion, but what AA was left enabled? MSAA is a pretty intensive AA method, though not as much as SSAA, while most other methods I can think of tend to run faster. Post-processing AA methods like MLAA and FXAA are generally very fast and can often be enabled with negligible performance impact in modern games.

    Saying 'MSAA disabled' does not imply some other form of AA is enabled at all, if that's what you're saying. But like I say, I could be missing the point.
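The cost gap between the AA methods mentioned above can be put in rough numbers. This is a toy model with made-up function names, not any real renderer: SSAA shades every sample, MSAA shades once per pixel while still resolving multiple coverage samples, and FXAA is a single full-screen post pass.

```python
# Toy model of AA costs: SSAA runs the pixel shader once per sample,
# MSAA shades once per pixel but keeps per-sample coverage, and FXAA
# adds one post-process pass on top of normal shading.

def coverage(px, py, samples):
    """Fraction of an n x n sample grid inside the half-plane
    x + y < 1.0, for the unit pixel whose corner is at (px, py).
    This is the per-sample coverage test MSAA resolves."""
    n = int(samples ** 0.5)
    hits = 0
    for i in range(n):
        for j in range(n):
            x = px + (i + 0.5) / n
            y = py + (j + 0.5) / n
            if x + y < 1.0:
                hits += 1
    return hits / (n * n)

def shading_cost(width, height, samples, mode):
    """Pixel-shader invocations for one frame under each scheme."""
    if mode == "SSAA":   # shade every sample
        return width * height * samples
    if mode == "MSAA":   # shade once per pixel, resolve coverage
        return width * height
    if mode == "FXAA":   # ordinary shading plus one post pass
        return width * height * 2
    raise ValueError(mode)

# An edge pixel gets a partial coverage value with 4x sampling
# instead of the hard 0-or-1 of a single centre sample.
print(coverage(0.0, 0.0, 4))
print(shading_cost(1920, 1080, 4, "SSAA"))
print(shading_cost(1920, 1080, 4, "MSAA"))
```

The point of the sketch: 4x MSAA smooths the edge pixel without quadrupling shader work, which is why it sits between the cheap post-process methods and full supersampling.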
    Last edited by watercooled; 28-08-2015 at 11:24 AM.

  4. #148
    Senior Member
    Join Date
    Dec 2013
    Posts
    3,526
    Thanks
    504
    Thanked
    468 times in 326 posts

    Re: AMD discrete GPU market share eroded to less than 20 per cent

    Quote Originally Posted by DanceswithUnix View Post
    DX11 is what it is, and I think everyone had good visibility of that including AMD.
    <snip>
    Yes it is; sorry if what I said came across as blaming it on Microsoft, that wasn't my intention.

    As you rightly point out, DX11 was what it was, and the blame lies squarely in AMD's court for misreading or mispredicting it - whatever the reasons, they made a GPU microarchitecture better suited to parallel workloads when the software wasn't there to support it. The fact is they judged it wrong. That's not to say GCN is better than Maxwell or vice versa; they each have their own strengths and weaknesses.

    Quote Originally Posted by watercooled View Post
    I might be missing part of the discussion but what AA was left enabled? MSAA is a pretty intensive AA method, though not as much as SSAA, while most other methods I can think of tend to run faster. Post-processing AA like MLAA and FXAA are generally very fast and can often be enabled with negligible performance impact on modern games.

    Saying 'MSAA disabled' does not imply some other form of AA is enabled at all, if that's what you're saying. But like I say I could be missing the point.
    Honestly, I couldn't tell you. In all the spurious nonsense Jimbo75 was spouting, I kind of lost track of which benchmarks he was using to back up his claim that Nvidia got caught lying about MSAA.

  5. #149
    Senior Member watercooled's Avatar
    Join Date
    Jan 2009
    Posts
    11,478
    Thanks
    1,541
    Thanked
    1,029 times in 872 posts

    Re: AMD discrete GPU market share eroded to less than 20 per cent

    WRT the AotS benchmarks, it seems the developers made a blog post about the MSAA issue which was discussed in the Ars article.
    Do a ctrl+F for MSAA to find the section: http://arstechnica.co.uk/gaming/2015...nt-for-nvidia/
    Which ends with:
    All that said, in order to remove any doubt from the benchmarks, all tests were run with MSAA disabled.
    So whether or not there are any MSAA bugs, they won't be affecting the benchmarks because it's not being used. If Nvidia were/are solely blaming an MSAA bug for their DX12 performance regression (I've not really been following it so I'm not sure if that's the case) then it would appear they're mistaken.

    Ars also links the full blog post from the devs:
    http://www.oxidegames.com/2015/08/16...-of-a-new-api/

  6. #150
    Senior Member
    Join Date
    Dec 2013
    Posts
    3,526
    Thanks
    504
    Thanked
    468 times in 326 posts

    Re: AMD discrete GPU market share eroded to less than 20 per cent

    To be fair, I went over all this with Jimbo75, so you'll have to forgive me for not wanting to go over old ground. Agree or disagree with what I've said - that's up to you; at this point I'm past caring, no offense intended. I just don't want to subject people to yet more arguments over whether a supposed bug in the MSAA code is, or is not, distorting the results some sites got. It just turns into a he-said-she-said situation, and the best way not to go down that road is to test without AA (IMO).

  7. #151
    Senior Member watercooled's Avatar
    Join Date
    Jan 2009
    Posts
    11,478
    Thanks
    1,541
    Thanked
    1,029 times in 872 posts

    Re: AMD discrete GPU market share eroded to less than 20 per cent

    I haven't read over the whole thread, TBH, as I've been really quite busy with work and it's grown quite long now. From what I can see, though, whether or not there is an MSAA bug (the developers of the game assert there is not), Ars did not enable MSAA, precisely to remove any doubt, so it's irrelevant.

    That's my take on it coming into the debate fresh, without having read most of the thread.

  8. #152
    Senior Member
    Join Date
    Dec 2013
    Posts
    3,526
    Thanks
    504
    Thanked
    468 times in 326 posts

    Re: AMD discrete GPU market share eroded to less than 20 per cent

    But did they disable all forms of anti aliasing? Like I said just a single page back...
    Quote Originally Posted by Corky34 View Post
    Dan Baker, co-founder of Oxide Games said "Some optimizations that the drivers are doing in DX11 just aren’t working in DX12 yet.".
    And that even though Ars disabled MSAA, they did not (afaik) disable ALL forms of Anti-Aliasing; without disabling ALL forms of Anti-Aliasing you're introducing a performance degradation that MSAA was specifically intended to alleviate. MSAA can (afaik) provide similar quality at higher performance, or better quality for the same performance; thus if it's not working correctly you take a performance hit to get the same quality when using other forms of AA.
    That aside they tested a Radeon R9 290X, GCN 1.1, DX12 feature level 12_0, against a GeForce 900 Series, Maxwell 2.0, DX12 feature level 12_1.
    They tested two cards with different DX12 feature sets; the R9 290X would have been missing Conservative Rasterization and Rasterizer Ordered Views (afaik).

  9. #153
    Senior Member watercooled's Avatar
    Join Date
    Jan 2009
    Posts
    11,478
    Thanks
    1,541
    Thanked
    1,029 times in 872 posts

    Re: AMD discrete GPU market share eroded to less than 20 per cent

    I already covered that. 'MSAA disabled' generally does not mean 'but we enabled something else instead'. So unless there's some catastrophic bug with every AA implementation - which is hugely unlikely, as different types work substantially differently and at different stages of the rendering pipeline - it really doesn't seem like a valid argument, TBH. It's quite normal for game/hardware reviews to include multiple benchmarks with MSAA on and off; I don't see why this is any different.

    Does the game demand or even use 12_1 features? Because they can mostly be implemented in software on the shaders, which would lower AMD performance if anything. You don't generally see reviews having some feature enabled on brand A and disabled on brand B when comparing apples-to-apples, and then complaining about how the one card is slower with the extra feature enabled. Having 12_1 features doesn't magically mean all DX12 games will use them or that they could even benefit from them. They're not even desirable a lot of the time. 'Oh but 12_1' seems to be used incorrectly as a trump card for anything when it comes to Maxwell, but like I say, it's not always such a big deal.

    TBH it really sounds like a big case of clutching at straws to find an excuse by way of blaming the developer or AMD without anything to back it up.

  10. #154
    Senior Member
    Join Date
    Dec 2013
    Posts
    3,526
    Thanks
    504
    Thanked
    468 times in 326 posts

    Re: AMD discrete GPU market share eroded to less than 20 per cent

    Quote Originally Posted by watercooled View Post
    I already covered that. 'MSAA disabled' generally does not mean 'but we enabled something else instead'. So unless there's some catastrophic bug with every AA implementation - which is hugely unlikely, as different types work substantially differently and at different stages of the rendering pipeline - it really doesn't seem like a valid argument, TBH. It's quite normal for game/hardware reviews to include multiple benchmarks with MSAA on and off; I don't see why this is any different.
    Because the developer said "Some optimizations that the drivers are doing in DX11 just aren’t working in DX12 yet"?
    For all we know, those optimizations that the drivers are doing in DX11 and that are not working in DX12 relate not only to MSAA but to all types of AA.

    Quote Originally Posted by watercooled View Post
    Does the game demand or even use 12_1 features?
    Who knows?

    Quote Originally Posted by watercooled View Post
    Because they can mostly be implemented in software on the shaders, which would lower AMD performance if anything. You don't generally see reviews having some feature enabled on brand A and disabled on brand B when comparing apples-to-apples, and then complaining about how the one card is slower with the extra feature enabled. Having 12_1 features doesn't magically mean all DX12 games will use them or that they could even benefit from them. They're not even desirable a lot of the time. 'Oh but 12_1' seems to be used incorrectly as a trump card for anything when it comes to Maxwell, but like I say, it's not always such a big deal.
    Implemented in software that by admission of the developer isn't optimised to the same extent as the drivers were in DX11.

    Re DX feature levels: What you seem to be saying is that it's OK to judge the performance of two cards that support different features - features of which the Anandtech article I linked says, of one, that the software alternative is a "performance intensive solution", and of the other that "ROVs will also be usable for other tasks that require controlled pixel blending, including certain cases of anti-aliasing."

    Quote Originally Posted by watercooled View Post
    TBH it really sounds like a big case of clutching at straws to find an excuse by way of blaming the developer or AMD without anything to back it up.
    IIRC no one has blamed AMD, although I would say the developer has caused a great deal of unnecessary fuss, what with their flip-flopping over who's to blame, and an as-yet unprovable poor implementation of DX12 - something that can't be proven until we have more than a single developer's implementation of DX12 to test.

  11. #155
    Senior Member watercooled's Avatar
    Join Date
    Jan 2009
    Posts
    11,478
    Thanks
    1,541
    Thanked
    1,029 times in 872 posts

    Re: AMD discrete GPU market share eroded to less than 20 per cent

    Quote Originally Posted by Corky34 View Post
    Because the developer said "Some optimizations that the drivers are doing in DX11 just aren’t working in DX12 yet"?
    For all we know those optimizations that the drivers are doing in DX11 and are not working in DX12 are related to not only MSAA but all types of AA.
    Where on Earth have they said they are using any AA? And like I said, different types of AA work VERY differently. You might as well blame it on ambient occlusion, shadows or texture resolution as group all AA together. And if they're 'not working in DX12' as the dev says, surely that would affect AMD too? They dismissed the claim that there's a brand-specific bug with AA.

    Quote Originally Posted by Corky34 View Post
    Re DX feature levels: What you seem to be saying is that it's OK to judge the performance of two cards that support different features, of which, going on what this article on Anandtech says is a "performance intensive solution", and of the other it says "ROVs will also be usable for other tasks that require controlled pixel blending, including certain cases of anti-aliasing."
    Let's start at the beginning. Things like ROVs and CR are tools for developers to use to do certain things in games. They are not exclusive to any GPU architecture - in fact, here is a document discussing implementing CR on a GeForce FX card. What 12_1 implies is the capability to do certain things in hardware that would previously have been performed in software. Some architectures will benefit more or less than others from this optimisation, but at the end of the day the 12_1 features are a way to do the SAME THING but FASTER. You're not losing features as far as the end product is concerned, a la PhysX.
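To make that "same thing, faster" point concrete, here's a toy sketch (the dictionary and function names are made up for illustration; this is not a real D3D12 API): the optional 12_1 hardware features gate a faster code path, while the software fallback produces the same end result.

```python
# Illustrative sketch: a renderer picks a hardware-assisted path when
# the feature level guarantees the feature, and otherwise falls back
# to a software path that produces the same image, just more slowly.

FEATURE_LEVELS = {
    # feature level -> optional hardware features it guarantees
    # (12_1 adds conservative rasterization and ROVs on top of 12_0)
    "12_0": set(),
    "12_1": {"conservative_rasterization", "rasterizer_ordered_views"},
}

def pick_oit_path(feature_level):
    """Choose an order-independent-transparency technique."""
    if "rasterizer_ordered_views" in FEATURE_LEVELS[feature_level]:
        return "ROV pixel sync"   # hardware-assisted, faster
    return "per-pixel sort"       # software fallback, same result

print(pick_oit_path("12_0"))
print(pick_oit_path("12_1"))
```

Note the fallback has to be written and explicitly chosen by the developer - which is the other half of the argument: the hardware feature being present doesn't mean a game automatically uses it.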

    WRT the Anandtech quote, CR is indeed more performance intensive than point sampling which is why an efficient hardware implementation can make sense if the architecture struggles with it in software. A rough analogy would be AES-NI instructions in modern CPUs - all CPUs are quite capable of performing AES encryption, but having hardware acceleration makes certain steps much faster. You don't claim Core2 'can't do AES' because it doesn't support AES-NI. However in consumer software, it's really not common for raw encryption throughput to be a bottleneck - you're not going to be maxing the CPU with just AES instructions in the same way you're not going to create a game scene from just CR or ROV.

    Quote Originally Posted by Corky34 View Post
    IIRC no one has blamed AMD, although I would say the developer has caused a great deal of unnecessary fuss, what with their flip-flopping over who's to blame, and an as-yet unprovable poor implementation of DX12 - something that can't be proven until we have more than a single developer's implementation of DX12 to test.
    They released their game to benchmark. People ran the benchmarks and posted the results. Other people grabbed the pitchforks assuming there's some dark conspiracy as to why it regresses in DX12 performance on Nvidia GPUs. I don't know what's 'provably poor' about what they've done. Release performance remains to be seen of course.

  12. #156
    Senior Member
    Join Date
    Dec 2013
    Posts
    3,526
    Thanks
    504
    Thanked
    468 times in 326 posts

    Re: AMD discrete GPU market share eroded to less than 20 per cent

    Quote Originally Posted by watercooled View Post
    Where on Earth have they said they are using any AA? And like I said different types of AA work VERY differently. You might as well blame it on ambient occlusion, shadows or texture resolution as grouping all AA together. And if they're 'not working in DX12' as the dev says - surely that would affect AMD too? They dismissed the claim that there's a brand-specific bug with AA.
    They haven't said they are, or are not, using any AA, and that's part of the problem.

    If you had taken the time to read what's already been said, you would see that a bug with MSAA wouldn't necessarily affect AMD in the same way as Nvidia. Seeing as AotS was the poster child for AMD's Mantle and was only later ported over to DX12, the developer has had much more time to optimise his code to run on AMD hardware, and with DX12 performance is more dependent on the developer.

    Quote Originally Posted by watercooled View Post
    Lets start at the beginning. Things like ROVs and CR are tools to use by developers to do certain things in games. They are not exclusive to any GPU architecture and in fact here is a document discussing implementing CR on a Geforce FX card. What 12_1 implies is the capability to do certain things in hardware where they would have previously been performed in software. Some architectures will benefit more or less than others from this optimisation but at the end of the day the 12_1 features are a way to do the SAME THING but FASTER. You're not losing features as far as the end product is concerned ala PhysX.
    Yes, starting at the beginning would be a very good idea; may I suggest that you do exactly that and read the rest of this thread, so you're aware that this has already been covered - WRT the developer having a long history with AMD, and how the game/benchmark has spent the majority of its life being developed on AMD's Mantle API and on AMD hardware.

    Also, if you're seriously saying that it would be feasible to do Conservative Rasterization and Rasterizer Ordered Views in software, then maybe you're underestimating how long that would take. The Anandtech article I linked to specifically says "The textbook use case for ROVs (Rasterizer Ordered Views) is Order Independent Transparency...these earlier OIT implementations would be very slow due to sorting, restricting their usefulness outside of CAD/CAM"

    Quote Originally Posted by watercooled View Post
    WRT the Anandtech quote, CR is indeed more performance intensive than point sampling which is why an efficient hardware implementation can make sense if the architecture struggles with it in software. A rough analogy would be AES-NI instructions in modern CPUs - all CPUs are quite capable of performing AES encryption, but having hardware acceleration makes certain steps much faster. You don't claim Core2 'can't do AES' because it doesn't support AES-NI. However in consumer software, it's really not common for raw encryption throughput to be a bottleneck - you're not going to be maxing the CPU with just AES instructions in the same way you're not going to create a game scene from just CR or ROV.
    Well, in that case let's get rid of all this dedicated hardware and switch to software-based implementations.

    Yes, they could/can be done in software, but the performance degradation is so great that it makes them all but useless for something like a game - you just wouldn't use them. We're not talking about waiting for something like a Core2 to do AES; we're talking about processing something at least 30 times a second.

    Quote Originally Posted by watercooled View Post
    They released their game to benchmark. People ran the benchmarks and posted the results. Other people grabbed the pitchforks assuming there's some dark conspiracy as to why it regresses in DX12 performance on Nvidia GPUs. I don't know what's 'provably poor' about what they've done. Release performance remains to be seen of course.
    That's already been covered with Jimbo75, which is why, when replying to you, I said I didn't want to go over old ground or subject people to yet more arguments.
    Last edited by Corky34; 28-08-2015 at 03:32 PM.

  13. #157
    Senior Member watercooled's Avatar
    Join Date
    Jan 2009
    Posts
    11,478
    Thanks
    1,541
    Thanked
    1,029 times in 872 posts

    Re: AMD discrete GPU market share eroded to less than 20 per cent

    Quote Originally Posted by Corky34 View Post
    If you had taken the time to read what's already been said, you would see that a bug with MSAA wouldn't necessarily affect AMD in the same way as Nvidia; seeing as AotS was the poster child for AMD's Mantle and was only later ported over to DX12, the developer has had much more time to optimise his code to run on AMD hardware.
    But... again... they're not using MSAA?

    Quote Originally Posted by Corky34 View Post
    Yes starting at the beginning would be a very good idea, may I suggest that you do exactly that and read the rest of this thread so you're aware that this has already been covered, WRT the developer having a long history with AMD and how the game/benchmark has spent the majority of its life being developed on AMD's Mantle API, and on AMD hardware.
    But the rest of the thread is irrelevant - I was directly answering the recent posts you made in reply to mine; I haven't said a word about things I haven't read.

    Quote Originally Posted by Corky34 View Post
    Also if you're seriously saying that it would be feasible to do Conservative Rasterization and Rasterizer Order Views in software then maybe you're underestimating how long that would take, the Anandtech article I linked to specifically says "The textbook use case for ROVs (Rasterizer Order Views) is Order Independent Transparency...these earlier OIT implementations would be very slow due to sorting, restricting their usefulness outside of CAD/CAM"
    It depends on the architecture, and they can be and have been done in software. Feasibility for games in software is another question, but Intel seem to think the potential speedup from CR for GCN is far less than it was for Kepler: https://software.intel.com/en-us/art...commodity-gpus

    Note the mecha demo: http://developer.amd.com/resources/d...al-time-demos/
    Order-independent transparency, as shown in the demo, is something ROV is useful for. Doing it in software is not nearly as fast, but it's doable.
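For the curious, the software approach can be sketched in a few lines. This is an illustrative toy resolve, not the demo's actual code (it assumes larger depth means farther away) - the per-pixel sort-and-blend is exactly the step ROVs help accelerate in hardware:

```python
# Minimal software order-independent-transparency resolve for a
# single pixel: collect the transparent fragments that landed on it,
# sort them by depth, then alpha-blend from back to front.

def resolve_oit(fragments):
    """fragments: list of (depth, rgb, alpha) tuples for one pixel.
    Returns the blended rgb colour over a black background."""
    colour = (0.0, 0.0, 0.0)  # background
    # Sort far-to-near (largest depth first), then apply the "over"
    # operator: result = alpha * src + (1 - alpha) * dst.
    for depth, rgb, alpha in sorted(fragments, reverse=True):
        colour = tuple(alpha * c + (1 - alpha) * b
                       for c, b in zip(rgb, colour))
    return colour

# Two overlapping 50%-transparent layers: red behind, blue in front.
frags = [(0.8, (1.0, 0.0, 0.0), 0.5),
         (0.2, (0.0, 0.0, 1.0), 0.5)]
print(resolve_oit(frags))
```

The sort is the expensive part at scale - every covered pixel needs its fragment list ordered every frame, which is why the Anandtech quote calls the pre-ROV implementations "very slow due to sorting".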

    Quote Originally Posted by Corky34 View Post
    Well in that case lets get rid of all this dedicate hardware and switch to software based implementations.
    Why would you want to do that? Having something be more efficient is not a negative, and I never said anything else. It's just not so binary as hardware being necessary in all circumstances, which is exactly why I chose AES-NI as an analogy.

    However, let's assume for a second that this developer is so close to AMD, and that software implementations of these features are not feasible for games - why the heck would they go to the length of implementing additional (e.g. transparency or shadow) rendering methods specifically for Maxwell cards? It doesn't add up, does it? As I explained, like many extensions these are not simply magical features that automatically run on any DX12 game to make it look better somehow. They're performance optimisations which have to be implemented and specifically called by the developers.

    Assuming that, because the hardware is present, it will be automatically utilised by the game and therefore make it run worse makes no sense at all.

    Edit: And even if it were the case that they implemented different shadow/transparency features making use of the hardware, it would make complete sense for these to be in-game options, because obviously different techniques would be necessary for cards without hardware support. So, as I also said earlier, it would make no sense for someone benchmarking the game to enable a setting on one GPU and not on another.
    Last edited by watercooled; 28-08-2015 at 03:59 PM.

  14. #158
    Admin (Ret'd)
    Join Date
    Jul 2003
    Posts
    18,481
    Thanks
    1,016
    Thanked
    3,208 times in 2,281 posts

    Re: AMD discrete GPU market share eroded to less than 20 per cent

    Quote Originally Posted by shaithis View Post
    Er, this thread's title kinda says something

    ....
    Yes .... but what does it say?

    It might be that, for instance, it says nVidia have 10x the marketing budget for video cards that AMD have.

  15. #159
    Senior Member watercooled's Avatar
    Join Date
    Jan 2009
    Posts
    11,478
    Thanks
    1,541
    Thanked
    1,029 times in 872 posts

    Re: AMD discrete GPU market share eroded to less than 20 per cent

    Correlation and causation, and all that...

  16. #160
    root Member DanceswithUnix's Avatar
    Join Date
    Jan 2006
    Location
    In the middle of a core dump
    Posts
    12,986
    Thanks
    781
    Thanked
    1,588 times in 1,343 posts
    • DanceswithUnix's system
      • Motherboard:
      • Asus X470-PRO
      • CPU:
      • 5900X
      • Memory:
      • 32GB 3200MHz ECC
      • Storage:
      • 2TB Linux, 2TB Games (Win 10)
      • Graphics card(s):
      • Asus Strix RX Vega 56
      • PSU:
      • 650W Corsair TX
      • Case:
      • Antec 300
      • Operating System:
      • Fedora 39 + Win 10 Pro 64 (yuk)
      • Monitor(s):
      • Benq XL2730Z 1440p + Iiyama 27" 1440p
      • Internet:
      • Zen 900Mb/900Mb (CityFibre FttP)

    Re: AMD discrete GPU market share eroded to less than 20 per cent

    Quote Originally Posted by watercooled View Post
    Correlation and causation, and all that...
    Eat cheese if you want an engineering degree


  17. Received thanks from:

    Saracen (28-08-2015)

