
Thread: AMD - Zen chitchat

  1. #417
    Spreadie
    Guest

    Re: AMD - Zen chitchat

    Quote Originally Posted by Spreadie View Post
    That's interesting - I was told the 7A37vA4 BIOS was the AGESA update.

    I'll have a look at the BETA.
    Needless to say, I had no luck with the BETA and sent it back. Scan's techies have just confirmed the board won't let the RAM stay at anything above 2133MHz - it'll accept higher memory clocks but reverts to 2133 after a reboot. It throws a fit if you switch on XMP too.

    My CPU was fine though - just waiting for them to return it with a replacement board.

  2. #418
    Moosing about! CAT-THE-FIFTH's Avatar
    Join Date
    Aug 2006
    Location
    Not here
    Posts
    32,042
    Thanks
    3,909
    Thanked
    5,213 times in 4,005 posts
    • CAT-THE-FIFTH's system
      • Motherboard:
      • Less E-PEEN
      • CPU:
      • Massive E-PEEN
      • Memory:
      • RGB E-PEEN
      • Storage:
      • Not in any order
      • Graphics card(s):
      • EVEN BIGGER E-PEEN
      • PSU:
      • OVERSIZED
      • Case:
      • UNDERSIZED
      • Operating System:
      • DOS 6.22
      • Monitor(s):
      • NOT USUALLY ON....WHEN I POST
      • Internet:
      • FUNCTIONAL

    Re: AMD - Zen chitchat

    It looks like GF's 7nm is more oriented towards high performance, while the first TSMC 7nm process released will be more oriented towards power saving:

    https://forums.anandtech.com/threads...#post-38952877

  3. #419
    Senior Member Xlucine's Avatar
    Join Date
    May 2014
    Posts
    2,162
    Thanks
    298
    Thanked
    188 times in 147 posts
    • Xlucine's system
      • Motherboard:
      • Asus prime B650M-A II
      • CPU:
      • 7900
      • Memory:
      • 32GB @ 4.8 Gt/s (don't want to wait for memory training)
      • Storage:
      • Crucial P5+ 2TB (boot), Crucial P5 1TB, Crucial MX500 1TB, Crucial MX100 512GB
      • Graphics card(s):
      • Asus Dual 4070 w/ shroud mod
      • PSU:
      • Fractal Design ION+ 560P
      • Case:
      • Silverstone TJ08-E
      • Operating System:
      • W10 pro
      • Monitor(s):
      • Viewsonic vx3211-2k-mhd, Dell P2414H
      • Internet:
      • Gigabit symmetrical

    Re: AMD - Zen chitchat

    The data flow rates possible with Epyc are getting me excited about the possibility of AMD finally figuring out how to have several blocks of shaders sat on an interposer acting as a single GPU. While Epyc is great, servers aren't the most sensitive market to initial price, and looking at total cost of ownership, monolithic chips aren't very different from several smaller chips. A market that is very sensitive to initial price (and doesn't care too much about running costs) is the graphics sector. Once AMD figure out how to make their top-end graphics chips the same way they make Epyc, they will have solved the GPU market. No-one else will be able to compete until they work out a way to copy AMD. No matter what monolithic chip Nvidia comes out with, AMD will be able to easily undercut it on price - and then keep throwing shader blocks at the next card until the combined area dwarfs anyone's reticle size, so the only way anyone can match it is to pay through the nose for a bleeding-edge node.
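The price argument can be illustrated with a toy yield model - a rough sketch in which every number (defect density, wafer cost and area, die sizes) is an illustrative assumption, not a real foundry figure:

```python
# Hypothetical sketch: why several small dies can undercut one big die.
# Uses a simple Poisson yield model; all constants are made up for
# illustration and ignore packaging/interposer costs.
import math

def yield_rate(area_mm2, defects_per_mm2=0.001):
    """Poisson yield: fraction of dies with zero defects."""
    return math.exp(-area_mm2 * defects_per_mm2)

def cost_per_good_die(area_mm2, wafer_cost=8000.0, wafer_area=70000.0):
    dies_per_wafer = wafer_area / area_mm2
    good_dies = dies_per_wafer * yield_rate(area_mm2)
    return wafer_cost / good_dies

big = cost_per_good_die(600)        # one monolithic 600 mm^2 GPU
small = 4 * cost_per_good_die(150)  # four 150 mm^2 chiplets

print(f"monolithic: ${big:.0f}, four chiplets: ${small:.0f}")
```

With these made-up numbers the four small dies come out at well under the cost of the single big one, because yield falls exponentially with die area; the packaging and interconnect costs the model ignores would eat into that advantage.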

  4. #420
    Not a good person scaryjim's Avatar
    Join Date
    Jan 2009
    Location
    Gateshead
    Posts
    15,196
    Thanks
    1,232
    Thanked
    2,290 times in 1,873 posts
    • scaryjim's system
      • Motherboard:
      • Dell Inspiron
      • CPU:
      • Core i5 8250U
      • Memory:
      • 2x 4GB DDR4 2666
      • Storage:
      • 128GB M.2 SSD + 1TB HDD
      • Graphics card(s):
      • Radeon R5 230
      • PSU:
      • Battery/Dell brick
      • Case:
      • Dell Inspiron 5570
      • Operating System:
      • Windows 10
      • Monitor(s):
      • 15" 1080p laptop panel

    Re: AMD - Zen chitchat

    Quote Originally Posted by Xlucine View Post
    ... the possibility of AMD finally figuring out how to have several blocks of shaders sat on an interposer acting as a single GPU. ...
    Hmmm, interesting concept. I have no idea if it's possible to make GPUs as an MCM. I rather suspect it's more technically problematic than making MCM CPUs - after all, each die in an EPYC MCM is actually a complete CPU in its own right, and AFAIK the kinds of tasks you can parallelise over multiple CPUs don't suffer the same diminishing returns as using multiple GPUs (which really drop off after the second GPU, of course). I like the idea of AMD churning out dies with, say, 2048 shaders and a single HBM2 memory controller, then just stacking 2 or 4 of them on an interposer to create bigger GPUs, but I imagine if it were technically straightforward they'd be doing it already, rather than creating traditional multi-GPU cards...

  5. #421
    Senior Member Xlucine's Avatar

    Re: AMD - Zen chitchat

    You get diminishing returns with more GPUs, but not with more shaders (and all the other fixed-function bits in a modern GPU). There's something going on with multiple GPUs vs one GPU that's twice the size, and it's probably latency- or bandwidth-related (at least, those are the only immediate differences). Bringing the GPUs closer together with more traces connecting them is bound to improve this.
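The distinction being drawn here - poor scaling across discrete GPUs, near-linear scaling across shaders - can be caricatured with a toy model (the 25% per-GPU sync overhead and 95% shader-scaling efficiency are made-up numbers for illustration, not measurements):

```python
# Toy model: each extra discrete GPU pays a fixed synchronisation/
# transfer cost per frame over the comparatively slow inter-card link,
# while adding shaders to one die loses only a little to shared
# front-end and bandwidth limits. All constants are illustrative.

def multi_gpu_speedup(n_gpus, sync_overhead=0.25):
    # Amdahl-style: useful work scales with n, but every extra GPU
    # adds a serial sync cost.
    return n_gpus / (1 + (n_gpus - 1) * sync_overhead)

def wider_gpu_speedup(shader_ratio, efficiency=0.95):
    # One die with proportionally more shaders.
    return 1 + (shader_ratio - 1) * efficiency

for n in (1, 2, 3, 4):
    print(f"{n} GPUs: {multi_gpu_speedup(n):.2f}x   "
          f"{n}x shaders on one die: {wider_gpu_speedup(n):.2f}x")
```

Even this crude model reproduces the familiar pattern: the second GPU helps, the third and fourth add less and less, while widening a single GPU keeps scaling almost linearly.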

    Thinking about it more, I'm wrong about the effect on power consumption. If die area is cheap and you can go slow & wide, then that's bound to have a beneficial impact on power efficiency, because you can always go slower and wider (as long as the interconnects aren't taking too much power - although with the inter-die comms in Epyc under 10W, that's unlikely).
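The "slow & wide" argument can be sketched numerically: dynamic power scales as C·V²·f, and supply voltage falls roughly with frequency near the operating point, so per-shader power drops approximately cubically as you slow down. The 250W baseline and 10W interconnect budget below are illustrative assumptions (the 10W echoes the Epyc inter-die figure mentioned above):

```python
# Rough sketch: two dies at half the clock match one die's throughput
# at a fraction of the dynamic power, because V ~ f implies P ~ f^3
# per unit area. Constants are illustrative, and static/leakage power
# is ignored.

def gpu_power_w(area_ratio, clock_ratio, interconnect_w=0.0, base_w=250.0):
    # Throughput matches the baseline whenever area_ratio * clock_ratio == 1.
    dynamic = base_w * area_ratio * clock_ratio ** 3  # V ~ f  =>  P ~ f^3
    return dynamic + interconnect_w

baseline  = gpu_power_w(1.0, 1.0)                       # one fast die
wide_slow = gpu_power_w(2.0, 0.5, interconnect_w=10.0)  # two dies, half clock

print(f"baseline: {baseline:.0f} W, wide & slow: {wide_slow:.1f} W")
# -> baseline: 250 W, wide & slow: 72.5 W
```

In this sketch the wide-and-slow configuration delivers the same nominal throughput at well under half the power, even after paying the interconnect budget - which is the crux of the post's argument.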

  6. #422
    Not a good person scaryjim's Avatar

    Re: AMD - Zen chitchat

    Quote Originally Posted by Xlucine View Post
    You get diminishing returns with more GPU's, but not more shaders ...
    That's true within a monolithic chip, sure. But it's a big jump to assume that you can therefore produce multiple dies with fewer shaders and stitch them together into an efficient multi-die GPU.

    As I said, if it were technically straightforward I would've expected it to have been done before: both AMD and Intel have been churning out MCM CPUs for fun for more than a decade. Now, it's possible that inter-die communication was a bigger issue for GPUs, and that AMD's Infinity Fabric alleviates some of that problem, but even then there are absolutely no hints or rumours that MCM GPUs are coming. And the most likely reason for that is that they're not practical yet. After all, AMD have been talking about 2.5D "systems" - with a CPU, GPU, RAM etc. all stacked onto an interposer - for years now, yet as far as I can tell they're nowhere near producing one yet...

  7. #423
    Senior Member
    Join Date
    Jul 2009
    Location
    West Sussex
    Posts
    1,722
    Thanks
    199
    Thanked
    243 times in 223 posts
    • kompukare's system
      • Motherboard:
      • Asus P8Z77-V LX
      • CPU:
      • Intel i5-3570K
      • Memory:
      • 4 x 8GB DDR3
      • Storage:
      • Samsung 850 EVo 500GB | Corsair MP510 960GB | 2 x WD 4TB spinners
      • Graphics card(s):
      • Sappihre R7 260X 1GB (sic)
      • PSU:
      • Antec 650 Gold TruePower (Seasonic)
      • Case:
      • Aerocool DS 200 (silenced, 53.6 litres)
      • Operating System:
      • Windows 10-64
      • Monitor(s):
      • 2 x ViewSonic 27" 1440p

    Re: AMD - Zen chitchat

    Well, Navi has been promising 'scalability' on the roadmaps for a while now:

    What that means is not known, of course, but it does seem to point to an MCM approach. The problem with Crossfire has been that while graphics loads are highly parallel, two GPUs don't seem to scale like that.
    One rumour (more forum speculation) is that Navi will have a central part, and the other dies just get used as shaders etc. So the central part feeds the shaders, rather than the driver trying to split the load among the GPUs.

  8. #424
    Not a good person scaryjim's Avatar

    Re: AMD - Zen chitchat

    Quote Originally Posted by kompukare View Post
    ... What that means is not known, of course, but it does seem to point to an MCM approach. ...
    Sorry, I know I'm a pedant, but ...

    We don't know what it means, but it means MCM? Really?

    I thought I was bad with the rampant speculation

    My issue with suggesting that there's a control block + shaders is that AMD's current approach is to scale the number of control blocks with the number of shaders. If you don't, you'll hit a point where the control blocks simply can't feed the shaders fast enough, and your shaders sit idle. It would be hugely complex to have a single control block that can feed an arbitrary number of shader blocks and maintain high utilisation. That's way more complex than how you build a CPU MCM, which is quite literally multiple whole CPUs connected together by dedicated high-bandwidth, low-latency links.

    I think it's more likely that Navi's "scalability" is an indication that they're working on reducing the barriers that cause scalability issues in multi-GPU setups. For a bit of rampant speculation of my own, I'd guess that Navi GPUs will be "aware" of other Navi GPUs in the system and they'll be able to directly schedule tasks to each other. Given Vega's getting a cache controller that can directly access system RAM and even non-volatile storage, I'd say that's a more likely next step than separating the control blocks from the shaders.

    Of course, if that was the case, it might enable MCM graphics if the interconnect between the GPUs is fast enough. But I think the benefit it would really bring is much better discrete mGPU scaling as your GPUs would genuinely work as a single block, rather than as alternating discrete rendering pipelines.

    At that point, MCM graphics would be a bit moot - if you've improved dmGPU scaling sufficiently then you can just build multi-GPU boards with 2, 3, 4 GPUs, all interconnected on the PCB (which is essentially what an MCM does anyway). For GPUs, a conventional MCM really doesn't make sense to me.

    Of course, if we're talking 2.5D GPUs on an interposer, that's a slightly different issue, but I still have concerns over implementation. As I said, if it were technically trivial we'd have seen it already, so there must be some barriers to implementation that haven't been overcome yet. I like the idea, and it's not dissimilar to the 2.5D system-on-interposer they've been talking about for a long time, but I can't shake the feeling that there's a level of complexity that still hasn't been resolved.

    That said, the marketing slides for Vega were very interesting: http://hexus.net/tech/news/graphics/...ure-uncovered/

    I don't know how much it's just pretties for the marketing, but the various structures within Vega are pictured - on those slides at least - as blocks on an interposer. Could just be a pretty way of highlighting the blocks, but I suspect that detail wasn't chosen at random...

  9. #425
    Goron goron Kumagoro's Avatar
    Join Date
    Mar 2004
    Posts
    3,154
    Thanks
    38
    Thanked
    172 times in 140 posts


  11. #426
    Not a good person scaryjim's Avatar

    Re: AMD - Zen chitchat

    Interesting stuff. All the key performance metrics sit in the abstract:

    We then propose three architectural optimizations that significantly improve GPM data locality and minimize the sensitivity on inter-GPM bandwidth. Our evaluation shows that the optimized MCM-GPU achieves 22.8% speedup and 5x inter-GPM bandwidth reduction when compared to the basic MCM-GPU architecture. Most importantly, the optimized MCM-GPU design is 45.5% faster than the largest implementable monolithic GPU, and performs within 10% of a hypothetical (and unbuildable) monolithic GPU. Lastly we show that our optimized MCM-GPU is 26.8% faster than an equally equipped Multi-GPU system with the same total number of SMs and DRAM bandwidth.
    There's a lot of numbers in there, but I think there's enough to see why we're not at the point of having MCM-GPUs yet - the basic hypothetical MCM-GPU is barely any faster than an optimised mGPU setup (the optimised MCM-GPU is 22.8% faster than the basic MCM-GPU and 26.8% faster than an equivalent mGPU system - that's not a big difference).
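As a quick sanity check (my arithmetic, not the paper's): both the 22.8% and 26.8% figures are speedups relative to the same optimised design, so dividing them gives the basic MCM-GPU's edge over the plain multi-GPU setup:

```python
# Combining the speedups quoted in the abstract (both relative to the
# optimised MCM-GPU) to compare the *basic* MCM-GPU against a plain
# multi-GPU system - the comparison the post draws.
opt_over_basic = 1.228   # optimised MCM-GPU vs basic MCM-GPU
opt_over_mgpu  = 1.268   # optimised MCM-GPU vs equivalent multi-GPU

basic_over_mgpu = opt_over_mgpu / opt_over_basic
print(f"basic MCM-GPU vs multi-GPU: {(basic_over_mgpu - 1) * 100:.1f}% faster")
# -> roughly 3%: without the NUMA-style optimisations, an MCM-GPU is
#    barely ahead of two discrete cards.
```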

    They suggest some reasonably complex enhancements too, including playing with cache architectures, that basically boil down to trying to keep data local to a single slice of GPU (typical NUMA enhancements, as they say in their conclusions). And perhaps most problematic, those enhancements only benefit about half of their simulated workloads: for some, the optimised MCM-GPU is actually slower than the baseline MCM-GPU, which as I've already mentioned isn't significantly faster than a similarly-specified mGPU setup (in this case they simulate a 2-card setup for the mGPU comparison).

    It's nice to know the problem is being looked at, but from their results it looks like it's only ever likely to be relevant for producing GPUs with greater specifications than are possible in a monolithic die, as a monolithic GPU of equivalent specification is always going to be faster. So bigger faster GPUs than you can get from one die, yes; cheaper smaller GPUs by sticking shaders together, probably not.

  12. #427
    Moosing about! CAT-THE-FIFTH's Avatar

    Re: AMD - Zen chitchat

    The first review of the Gigabyte B350 based mini-ITX motherboard:

    https://lanoc.org/review/motherboard...0n-gaming-wifi

    It also appears that Vega will support DX12 at the same feature level as Nvidia:

    Quote Originally Posted by sebbbi over on Beyond3D forums
    Just got a Vega FE. Going to test some fp16 optimizations tomorrow. I have a few juicy targets

    At some point I also need to test the paging system. It is awesome that we finally have automated paging on graphics workloads. Let's see how it manages a 32 GB volume texture

    Update: Maybe I should buy extra 32 GB RAM to test with 64 GB volume texture instead
    Quote Originally Posted by Rys who works for RTG and runs Beyond3D
    I handed the board to him [sebbi] myself earlier today in Helsinki, over a beer. The joys of looking after Game Engineering for Europe. Full architecture details will come out later, can't spoil that, but it is FL 12_1 top tier for everything.
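For a sense of scale (my arithmetic, not from the quotes): a 32 GB volume texture corresponds to a cubic RGBA8 volume of about 2048 texels per side, far beyond any card's local memory - hence the interest in automated paging:

```python
# Quick check of the scale of the 32 GB volume texture sebbbi mentions:
# a cubic volume at 4 bytes per texel (e.g. RGBA8) with 2048 texels
# per side comes to exactly 32 GiB.
side = 2048
bytes_per_texel = 4  # RGBA8 assumed for illustration
total_bytes = side ** 3 * bytes_per_texel

print(f"{side}^3 x {bytes_per_texel} B = {total_bytes / 2**30:.0f} GiB")
# -> 2048^3 x 4 B = 32 GiB
```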

  13. #428
    Moosing about! CAT-THE-FIFTH's Avatar

    Re: AMD - Zen chitchat

    It seems both PCPer and a person from Reddit bought the RX Vega Frontier Edition and it really is a disaster for gaming - it can barely match a GTX 1080 whilst sucking down just under 300W. Another crap AMD card launch, this time with shoddy drivers and a card which appears to power-throttle, since it can barely maintain 1400MHz.
    Last edited by CAT-THE-FIFTH; 30-06-2017 at 11:22 AM.

  14. #429
    Not a good person scaryjim's Avatar

    Re: AMD - Zen chitchat

    Quote Originally Posted by CAT-THE-FIFTH View Post
    It seems both PCPer and a person from Reddit bought the RX Vega Frontier Edition ...
    CAT - there is no "RX Vega Frontier Edition". There is Radeon RX Vega, and Radeon Vega Frontier; "different" cards. Vega Frontier, which is the only one that's launched, is a professional card and has no optimised gaming drivers yet.

    It's going to be crap for gaming. That's not surprising.

    How about waiting until the actual gaming card comes out before making snap judgements about Vega's performance in games?

  15. #430
    Moosing about! CAT-THE-FIFTH's Avatar

    Re: AMD - Zen chitchat

    Quote Originally Posted by scaryjim View Post
    CAT - there is no "RX Vega Frontier Edition". There is Radeon RX Vega, and Radeon Vega Frontier; "different" cards. Vega Frontier, which is the only one that's launched, is a professional card and has no optimised gaming drivers yet.

    It's going to be crap for gaming. That's not surprising.

    How about waiting until the actual gaming card comes out before making snap judgements about Vega's performance in games?
    Unfortunately, AMD has said it was for gaming too, and it lacks certified drivers for applications like the Pro cards, so it's basically an overpriced prosumer card. This is an utter failure of a launch: for the next month there will be more and more reviews testing the gaming side, and the power consumption is horrible - it seems to be between 280W and 300W.

    Nvidia must be laughing now, and AMD is getting slated everywhere for this - what did they think would happen when they released an expensive prosumer card with all the lighting etc. which a proper pro card would not have?

    The other issue is that, with zero certified drivers, it looks bad relative to AMD Pro and FirePro cards with older GPUs, so AMD compared it to the Titan XP instead - but the latter is far stronger in games given the utterly rubbish drivers AMD has launched for it.

    I mean, what is AMD PR smoking? They keep making idiotic moves like this.

    This is a £1000 card with no pro drivers, tarted up with lighting (which is basically what gamers want), but with rubbish performance in games and horrible power consumption.

    We might give AMD some leeway, but this is why Nvidia has close to 70% of the discrete market.

    Every bloody launch they have some problem where you need to start playing mental gymnastics to say "but, but, it will be better".

    But by then Nvidia are not far off launching something better.
    Last edited by CAT-THE-FIFTH; 30-06-2017 at 11:44 AM.

  16. #431
    Senior Member kompukare

    Re: AMD - Zen chitchat

    I've said it elsewhere and will repeat it here: Vega dGPU is hardly a high priority for AMD. Their priorities must be something along these lines:
    1. Zen as server (Epyc)
    2. Zen as APU (TBA)
    3. Zen as Ryzen
    4. Other things (Vega dGPU, Threadripper, etc.)

    Thing is, even when AMD had the better product (the 5000 series, arguably most SKUs of the original 7000 series, Hawaii versus the 780), they always sold way less than Nvidia. So in terms of return on investment, dGPUs are hardly going to get a large R&D share.
    By far the most important thing with Vega is how well it performs in the Zen APUs, and it's unknown whether getting it to do so will be a positive or a negative for its performance as a dGPU. The other unknowns are whether RTG made good decisions with their insistence on using HBM2, and whether they made a part more suited to HPC than gaming. Obviously, they can't afford to make both, unlike Nvidia with GP102 and GP100.

    As for 300W, well, that sounds like it's running way outside its sweet spot. Only now with Ryzen do AMD hopefully have the volumes for their GF wafer agreement, but a big question has always been why they didn't use GF for GPUs previously. GPUs should easily be able to go either wide & slow or narrow & fast, so all these years when they paid the penalty, why didn't they make bigger dies and run them at lower clocks? (Hawaii had way better perf/area than GK110, but a larger die running slower would have made more sense; certainly underclocked and undervolted, Hawaii was actually rather efficient.)

  17. #432
    Moosing about! CAT-THE-FIFTH's Avatar

    Re: AMD - Zen chitchat

    Quote Originally Posted by kompukare View Post
    I've said it elsewhere and will repeat it here: Vega dGPU is hardly a high priority for AMD. Their priorities must be something along these lines:
    1. Zen as server (Epyc)
    2. Zen as APU (TBA)
    3. Zen as Ryzen
    4. Other things (Vega dGPU, Threadripper, etc.)

    Thing is, even when AMD had the better product (the 5000 series, arguably most SKUs of the original 7000 series, Hawaii versus the 780), they always sold way less than Nvidia. So in terms of return on investment, dGPUs are hardly going to get a large R&D share.
    By far the most important thing with Vega is how well it performs in the Zen APUs, and it's unknown whether getting it to do so will be a positive or a negative for its performance as a dGPU. The other unknowns are whether RTG made good decisions with their insistence on using HBM2, and whether they made a part more suited to HPC than gaming. Obviously, they can't afford to make both, unlike Nvidia with GP102 and GP100.

    As for 300W, well, that sounds like it's running way outside its sweet spot. Only now with Ryzen do AMD hopefully have the volumes for their GF wafer agreement, but a big question has always been why they didn't use GF for GPUs previously. GPUs should easily be able to go either wide & slow or narrow & fast, so all these years when they paid the penalty, why didn't they make bigger dies and run them at lower clocks? (Hawaii had way better perf/area than GK110, but a larger die running slower would have made more sense; certainly underclocked and undervolted, Hawaii was actually rather efficient.)
    It's clearly a driver issue - yet AMD decided to launch a prosumer card with shiny lighting and say it will run games, when at times performance is terrible.

    This is yet another half-baked launch by AMD - it's not even a Pro series card or a FirePro card.

    It's just another typical card launch from them - now they will be in damage-limitation mode until RX Vega is released, but even Ryan Shrout said that, from what he gathered, performance might improve another 10% with drivers.

    Even then the power consumption is horrible.

    Edit!!

    I think AMD should just stick to budget GPUs - they have no clue how to launch or support performance cards, and at this point how many more months until midrange Volta?

    It's damaging the perception of the whole brand.
    Last edited by CAT-THE-FIFTH; 30-06-2017 at 12:03 PM.
