
Thread: Features - Roy Taylor: AMD Radeon GPUs remain unsurpassed

  1. #49
    Senior Member
    Join Date
    Sep 2011
    Posts
    264
    Thanks
    4
    Thanked
    8 times in 6 posts
    • tribaljet's system
      • Motherboard:
      • Intel HM65
      • CPU:
      • Intel Core i7-2820QM
      • Memory:
      • 8GB Transcend DDR3-1600
      • Storage:
      • 1TB HGST Travelstar 7K1000
      • Graphics card(s):
      • Intel HD 3000 + Nvidia Geforce GT 555M
      • Operating System:
      • Windows 8.1 Pro 64bits

    Re: Features - Roy Taylor: AMD Radeon GPUs remain unsurpassed

    Actually, you've just been moronic since the moment you entered this thread, as can be seen in every single post you've made. Be more literate when posting before you act condescending towards others. Clearly you go by the idiocy motto that trolls and other sad excuses for human beings follow on the web, namely "normal person plus anonymity plus audience equals humanity's dump".

  2. #50
    Senior Member
    Join Date
    Jul 2009
    Location
    West Sussex
    Posts
    1,721
    Thanks
    197
    Thanked
    243 times in 223 posts
    • kompukare's system
      • Motherboard:
      • Asus P8Z77-V LX
      • CPU:
      • Intel i5-3570K
      • Memory:
      • 4 x 8GB DDR3
      • Storage:
      • Samsung 850 EVO 500GB | Corsair MP510 960GB | 2 x WD 4TB spinners
      • Graphics card(s):
      • Sapphire R7 260X 1GB
      • PSU:
      • Antec 650 Gold TruePower (Seasonic)
      • Case:
      • Aerocool DS 200 (silenced, 53.6 litres)
      • Operating System:
      • Windows 10-64
      • Monitor(s):
      • 2 x ViewSonic 27" 1440p

    Re: Features - Roy Taylor: AMD Radeon GPUs remain unsurpassed

    Quote Originally Posted by anselhelm View Post
    Incorrect. An initial assessment of this has shown quite categorically that Mantle doesn't really help that much with average frame rate (5-15%) compared to DX11, though it can help more with minimum frame rates. Furthermore, the Nvidia parts still really outperform the AMD parts for a lower power budget (so generally cooler and quieter).
    But surely minimum frame rates are far more important? I mean, which is more likely to affect your enjoyment of a game: that it doesn't quite reach the same peak frame rate, or that in certain places the game slows down so much that you really notice?

    Those certain places are, of course, the areas where DirectX makes the CPU or the maximum draw-call rate the bottleneck; anyone looking at what Mantle does would be able to predict that Mantle will do better there.

    In fact, the main problem with Mantle seems to be that AMD don't seem to know how to market it. Selling higher FPS is totally the wrong way to market Mantle; better minimums, consistency and frametimes matter more, but they are far harder to market, and it seems AMD haven't really tried. A quick sketch of the difference is below.
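
    To put rough numbers on that (a toy Python sketch with invented frametimes, not benchmark data), two runs can post similar or even better averages yet feel completely different at the minimums:

        # Frametimes in milliseconds for two hypothetical runs.
        smooth = [20.0] * 100                  # steady 50 fps
        spiky  = [12.0] * 95 + [120.0] * 5     # ~83 fps mostly, with stalls

        def avg_fps(ft):
            return 1000.0 * len(ft) / sum(ft)

        def one_percent_low_fps(ft):
            # fps at the worst 1% of frametimes (99th percentile)
            worst = sorted(ft)[int(len(ft) * 0.99) - 1]
            return 1000.0 / worst

        for name, ft in (("smooth", smooth), ("spiky", spiky)):
            print(name, round(avg_fps(ft), 1), "avg fps,",
                  round(one_percent_low_fps(ft), 1), "fps 1% low")
        # smooth: 50.0 avg / 50.0 low; spiky: ~57.5 avg / ~8.3 low.
        # The spiky run "wins" the average yet is the one you notice slowing down.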

    Well, that and the usual people who haunt forums and seem to think that Intel or Nvidia cannot do anything wrong, while AMD cannot do anything right.

  3. Received thanks from:

    CAT-THE-FIFTH (29-10-2014)

  4. #51
    root Member DanceswithUnix's Avatar
    Join Date
    Jan 2006
    Location
    In the middle of a core dump
    Posts
    12,986
    Thanks
    781
    Thanked
    1,588 times in 1,343 posts
    • DanceswithUnix's system
      • Motherboard:
      • Asus X470-PRO
      • CPU:
      • 5900X
      • Memory:
      • 32GB 3200MHz ECC
      • Storage:
      • 2TB Linux, 2TB Games (Win 10)
      • Graphics card(s):
      • Asus Strix RX Vega 56
      • PSU:
      • 650W Corsair TX
      • Case:
      • Antec 300
      • Operating System:
      • Fedora 39 + Win 10 Pro 64 (yuk)
      • Monitor(s):
      • Benq XL2730Z 1440p + Iiyama 27" 1440p
      • Internet:
      • Zen 900Mb/900Mb (CityFibre FttP)

    Re: Features - Roy Taylor: AMD Radeon GPUs remain unsurpassed

    Quote Originally Posted by kalniel View Post
    I think you missed the point cat was making. Maxwell is only a few steps away from Fermi, which was a power-hungry monster. AMD's Pitcairn was considered very power-efficient for the performance, so even if nVidia's latest uArch is now beating it for perf/watt, AMD should be able to make similar small step changes to compete.

    I.e. there's no inherent reason why small step changes won't work for AMD, given how much they did for nVidia.
    Don't think that is quite right.

    The 480 was a power-hungry monster because Nvidia messed up the *implementation*, not the architecture. That gave them silicon full of defects that didn't hit its targets, forcing them to up the voltage to get the thing to run at workable clocks and to disable compute units that were broken. That isn't the same as the architecture needing tweaking. OTOH, my 460 is still going quite nicely; I think that part shows that even early Fermi was quite capable.

    The 480 and 580 were supposedly very similar designs, just with the 580 fixed. Basically, the 580 is what the engineers would have wanted to put out as the 480, had there not been the usual commercial pressures of releasing new product all the time so you don't go out of business.

    http://www.anandtech.com/bench/product/1135?vs=1350

    So Nvidia fixed a lemon, because they are Nvidia who occasionally come out with lemons as well as the golden parts like the 8800GT. AMD don't tend to have that implementation problem, so without the step change of lemon fixing they have more work to do.

    OFC we are coming up to the time for 20nm parts to come out, and this is traditionally where AMD shine and Nvidia put their foot in their mouth like they did with the 480.

    Edit to add: Just thought, the R9 280 to R9 285 refresh was pretty impressive. Better performance from the same number of compute elements, with a third of the memory width gone and 50W less power. I think that shows they are serious about updating GCN.
    Last edited by DanceswithUnix; 28-10-2014 at 09:13 PM.

  5. #52
    root Member DanceswithUnix's Avatar
    Join Date
    Jan 2006
    Location
    In the middle of a core dump
    Posts
    12,986
    Thanks
    781
    Thanked
    1,588 times in 1,343 posts
    • DanceswithUnix's system
      • Motherboard:
      • Asus X470-PRO
      • CPU:
      • 5900X
      • Memory:
      • 32GB 3200MHz ECC
      • Storage:
      • 2TB Linux, 2TB Games (Win 10)
      • Graphics card(s):
      • Asus Strix RX Vega 56
      • PSU:
      • 650W Corsair TX
      • Case:
      • Antec 300
      • Operating System:
      • Fedora 39 + Win 10 Pro 64 (yuk)
      • Monitor(s):
      • Benq XL2730Z 1440p + Iiyama 27" 1440p
      • Internet:
      • Zen 900Mb/900Mb (CityFibre FttP)

    Re: Features - Roy Taylor: AMD Radeon GPUs remain unsurpassed

    No mention of "Why do your Linux drivers still suck?" in the questions.

  6. #53
    HEXUS.social member Agent's Avatar
    Join Date
    Jul 2003
    Location
    Internet
    Posts
    19,185
    Thanks
    739
    Thanked
    1,614 times in 1,050 posts

    Re: Features - Roy Taylor: AMD Radeon GPUs remain unsurpassed

    Quote Originally Posted by HalloweenJack View Post
    Did you honestly just mention that horrendous mess called `Daylight`?

    and OK, whilst technically it is UE4 and Windows - erm, yeah - I guess you`ve `played` it (meta review of like 40%, I think, isn't it?)
    Yeah, it's not great. But it is a UE4 game on the PC that you can buy, so.....

    Quote Originally Posted by HalloweenJack View Post
    Well, Thief supports Mantle and is UE3, which means it is more than doable; Epic, I think, can see the future here. What's the point of DX12 when only 1% of PCs will use it 18 months after launch? DX11-only games still haven't come out - pretty much all have a DX9 fallback mode, which can only hamper development
    DX11.3 mostly solves that problem. You can dev for DX12, which is the CPU-optimised version of DX11.3. If you can't run DX12... well, you mostly can, just slower! (it's linked earlier in the thread)

    I'm not sure why you think they've "seen the future"? They've publicly stated they do not plan to add it.
    UE4 targets PC, Mac, Linux, iOS, Android, Xbox One, PS4 and HTML5 out of the box. Go and compare Mantle support on those platforms.... It doesn't make that much sense right now to add a very low-level bit of software like Mantle. It's immature, only has backing from AMD, and a competitor is in development that would require less change (and will ultimately have a larger user base).
    Engines are written for decades of use. They are not games - it's not always a simple thing.

    I'd be very surprised if they added it given the targeted platforms - it would need a lot more market penetration for it to be a base addition. Games engines are seriously complex, it's more likely they're going to leave it to end developers given that it's pretty bespoke.

    That and it's not on their roadmap.
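
    To make the cross-platform point concrete, here is a minimal sketch (Python, with invented names - nothing like UE4's actual interfaces) of why one more rendering API is a big ask for an engine: every backend has to implement the full rendering surface, then be maintained and QA'd on every platform the engine ships on:

        from abc import ABC, abstractmethod

        class RenderBackend(ABC):
            """One of these per API: D3D11, OpenGL, console APIs... and Mantle?"""
            @abstractmethod
            def create_buffer(self, size_bytes): ...
            @abstractmethod
            def create_texture(self, width, height, fmt): ...
            @abstractmethod
            def submit(self, draw_calls): ...

        class D3D11Backend(RenderBackend):
            def create_buffer(self, size_bytes):
                print("D3D11 buffer,", size_bytes, "bytes")
            def create_texture(self, width, height, fmt):
                print("D3D11 texture", width, "x", height, fmt)
            def submit(self, draw_calls):
                print("D3D11 submit of", len(draw_calls), "draws")

        backend = D3D11Backend()
        backend.submit(["draw1", "draw2"])  # the game never sees the API behind this
        # Adding Mantle means writing and maintaining another complete
        # backend like the one above - for one vendor, on one platform.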

    edit:

    Quote Originally Posted by HalloweenJack View Post
    doh ofc they wont support mantle
    http://hothardware.com/News/Unreal-E...hip-With-Epic/

    NVidia threw a ****e ton of money at them not to.....
    If Nvidia threw money at Epic to stop Mantle, provide evidence. That's a pretty serious accusation.

    Also: http://www.extremetech.com/computing...vidia-and-epic

    Gameworks is not part of UE4. It was a bit of a misunderstanding of words.
    Last edited by Agent; 28-10-2014 at 10:16 PM.
    Quote Originally Posted by Saracen View Post
    And by trying to force me to like small pants, they've alienated me.

  7. #54
    Senior Member
    Join Date
    Mar 2010
    Posts
    2,567
    Thanks
    39
    Thanked
    179 times in 134 posts

    Re: Features - Roy Taylor: AMD Radeon GPUs remain unsurpassed

    http://blogs.nvidia.com/blog/2014/03/19/epic-games/

    ^^ from nv themselves.

    Together with Epic, we’ve incorporated support for NVIDIA GameWorks directly into Unreal Engine 4, making it easier for UE4 licensees to take advantage of our technology. NVIDIA GameWorks libraries are designed to help developers create a wide variety of special effects, such as more realistic clothing or destruction, and now these effects are available to every developer with a UE4 license.
    Last edited by HalloweenJack; 28-10-2014 at 09:58 PM.

  8. #55
    Senior Member
    Join Date
    Mar 2010
    Posts
    2,567
    Thanks
    39
    Thanked
    179 times in 134 posts

    Re: Features - Roy Taylor: AMD Radeon GPUs remain unsurpassed

    dbl post

  9. #56
    HEXUS.social member Agent's Avatar
    Join Date
    Jul 2003
    Location
    Internet
    Posts
    19,185
    Thanks
    739
    Thanked
    1,614 times in 1,050 posts

    Re: Features - Roy Taylor: AMD Radeon GPUs remain unsurpassed

    Quote Originally Posted by kompukare View Post
    Actually the reason that UE3/4 will probably have Mantle support is that

    1) coding for consoles and Mantle will be very similar, and all the main engine providers have to support the consoles first and PCs second; just look at Ryse, where CryEngine seems to have been very much optimised for AMD's GCN in terms of the kind of compute and lighting effects used.
    UE3 is pretty much dead development wise, unless you have an existing licence / support with Epic. The push to UE4 started ages ago during a beta called 'Rocket' which was closed unless invited.

    This is the key point though - Mantle needs market penetration. If it hit consoles then you're much more likely to get it integrated into an engine like UE4. That and it needs to be a proven, supported product by AMD.
    Epic are pretty serious about UE4 stability and they will want proof of this before integration.

    http://www.dualshockers.com/2014/03/...-both-further/

    If Sony / MS take it up, we'll see, but I suspect they'll do their own versions if needed. The scope for improvement on a console is very debatable though, as the existing code is already running very close to the metal.

    Also keep in mind that developers who can code at this level are *expensive*. Dropping in a new API is often met with resistance. People like to know the way things work, and change can be difficult - no matter how much AMD (or anyone else) would like to say it's easy. These codebases will often have hundreds of thousands, maybe millions, of lines of code. Tried and tested is important.

    Quote Originally Posted by kompukare View Post
    What would be very disruptive would be Mantle on Linux. SteamOS with Mantle and an easy way to port from console could see a rapid expansion of Linux gaming, and prevent Microsoft from trying to force Windows sales with DX12 like they did with Windows Vista and DX10.
    Mantle on Linux / SteamOS actually makes a lot of sense. Mantle really does help take out the CPU bottleneck in some situations. This means you can start pitching quite efficient, optimised SteamOS boxes at a low price.

    The issue then is getting the games to support it - and games of any complexity will ultimately still be built on an engine. A toy model of why the CPU side matters is below.
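
    As a toy model (Python, invented figures - not from any benchmark): a frame takes roughly the longer of the CPU's submission time and the GPU's render time, so cutting per-draw-call CPU cost only lifts the frame rate when the CPU is the bottleneck - which is exactly the slow-CPU case:

        def frame_ms(draw_calls, cpu_us_per_draw, gpu_ms):
            cpu_ms = draw_calls * cpu_us_per_draw / 1000.0
            return max(cpu_ms, gpu_ms)  # CPU and GPU work overlap

        draws, gpu_ms = 8000, 12.0      # hypothetical scene on a slow CPU
        for api, cpu_us in (("thick API", 4.0), ("thin API", 0.5)):
            print(api, round(1000.0 / frame_ms(draws, cpu_us, gpu_ms), 1), "fps")
        # thick API: 32 ms, CPU-bound          -> ~31 fps
        # thin API:   4 ms, GPU-bound at 12 ms -> ~83 fps on the same hardware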

    Quote Originally Posted by HalloweenJack View Post
    Crysis 3 is DX9 / DX11, as it runs on Xbox 360. Whether they want to tell you that or not, CryEngine 3 has to be DX9-compatible to run on the older console; change the renderer in the ini file to use the old renderer, which is still part of the game

    ergo, it's not a ground-up DX11 game
    The key word is compatible. A game can be 'DX11' from the ground up but have a DX9-compatible code path. DX11 features either simply get dropped or are emulated where appropriate.
    I've got to stress though that DX9 was a bloody good release. DX10 and 11 were nice, but nowhere near as game-changing as DX9 was at the time.

    I think you're thinking about this the wrong way. Having a 32-bit code path so you can run old applications on a 64-bit Windows system is not a bad thing. Nor is having a DX9 codepath in a game / engine.
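
    As a rough sketch of that (Python, invented feature names): the same game, the same assets, with effects simply switched on or off depending on which path is running:

        FEATURES_BY_PATH = {
            "dx11": {"tessellation", "gpu_particles"},
            "dx9":  set(),  # same game, extra effects disabled
        }

        def build_frame(path):
            effects = FEATURES_BY_PATH[path]
            steps = ["draw scene"]
            if "tessellation" in effects:
                steps.append("tessellate terrain")
            if "gpu_particles" in effects:
                steps.append("simulate particles on the GPU")
            return steps

        print(build_frame("dx11"))  # full effects
        print(build_frame("dx9"))   # still the same game, fewer effects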

    Quote Originally Posted by HalloweenJack View Post
    Strider? The game only `needs` DX11 for the opening screen - you can force a bypass and the game itself runs on a GT220 (albeit as well as anything can run on a GT220) - so it's a DX10, maybe a DX9, game...
    But why does it matter?
    If they need DX11 for the title screen - so be it. If they didn't then fine.

    The DX revision is just the API the developers are using. It doesn't make a better game. This mindset of having a 'pure' DX11 game is wrong. You don't build most of the assets for the game any differently.
    There is little point in locking a game to a higher DX version when you can run it on lower ones, turn things off, and potentially get more sales.

    Quote Originally Posted by HalloweenJack View Post
    Daylight - being a Slenderman-type game, and not a good one - well, yeah, for that title (and why is it UE4 anyway??)
    Why is X game Y engine, at all?

    The developers probably knew UE and used it. UE4 is very fast to develop in.

    Quote Originally Posted by HalloweenJack View Post
    so yes ok that's a DX11 title.
    Because it locks you to a specific version when installed? What about if it uses no DX11 features and runs a DX9 codepath with a DX11 requirement set?
    Again - it's the wrong mindset you're in.

    Quote Originally Posted by HalloweenJack View Post
    any game that can run on PS3 or Xbox 360 has to have a DX9 (or OpenGL-equivalent) renderpath, so it cannot be called a DX11-only game
    Calc.exe on 64bit Windows is a 32bit application, therefore Windows can't be called a 64bit OS.

    HJ....please understand this....a renderpath is just that. It's the route the game takes to output things to your screen (in simple terms). This entire "cannot be called a DX11 only game" mindset because it also has a DX9 path makes absolutely no sense.

    If there is something about games engines you want clarifying (I have over a decade of UE experience in a commercial capacity), I'm happy to explain where I can - but ask, as some of the things you're saying don't make that much sense right now.
    Quote Originally Posted by Saracen View Post
    And by trying to force me to like small pants, they've alienated me.

  10. #57
    HEXUS.social member Agent's Avatar
    Join Date
    Jul 2003
    Location
    Internet
    Posts
    19,185
    Thanks
    739
    Thanked
    1,614 times in 1,050 posts

    Re: Features - Roy Taylor: AMD Radeon GPUs remain unsurpassed

    Quote Originally Posted by HalloweenJack View Post
    Check the dates. Read my link.

    If you're in any doubt, get the UE4 source code and look yourself. Gameworks is not in UE4. Period.

    Here is another link, from one of the guys I know who works for Epic: https://forums.unrealengine.com/show...ll=1#post60203

    I just wanted to clear up a few things!

    UE4 does _not_ have Gameworks "built into its core". We do use PhysX at the core of UE4, but that is a cross-platform, CPU-focused, rigid-body and collision engine, and we work closely with NVIDIA to ensure it runs well on all platforms we port to. Currently in UE4 NO PhysX or APEX feature uses the GPU.

    I'm very excited about some of the simulations NVIDIA have shown on a GPU, and I'd love to make them standard features in the engine. We have a great relationship with NVIDIA going back many years, and we are talking to them about a way to bring that tech to UE4 in a cross-platform way. We have no plans to implement them until we come up with a solution to that problem though.
    Even if "Gameworks", which is just marketing for several bits of tech, was 'core' to UE4 - it doesn't matter. It would be cross platform and not locked down like existing stuff in UE4. You get the UE4 source code with a licence, so it's not like it would even be an issue...

    If there is something you'd like clarifying, then please just ask...
    Last edited by Agent; 28-10-2014 at 10:21 PM.
    Quote Originally Posted by Saracen View Post
    And by trying to force me to like small pants, they've alienated me.

  11. #58
    Moosing about! CAT-THE-FIFTH's Avatar
    Join Date
    Aug 2006
    Location
    Not here
    Posts
    32,039
    Thanks
    3,910
    Thanked
    5,224 times in 4,015 posts
    • CAT-THE-FIFTH's system
      • Motherboard:
      • Less E-PEEN
      • CPU:
      • Massive E-PEEN
      • Memory:
      • RGB E-PEEN
      • Storage:
      • Not in any order
      • Graphics card(s):
      • EVEN BIGGER E-PEEN
      • PSU:
      • OVERSIZED
      • Case:
      • UNDERSIZED
      • Operating System:
      • DOS 6.22
      • Monitor(s):
      • NOT USUALLY ON....WHEN I POST
      • Internet:
      • FUNCTIONAL

    Re: Features - Roy Taylor: AMD Radeon GPUs remain unsurpassed

    Quote Originally Posted by shaithis View Post
    Did Hexus post a complete BS interview with him in the news section while he was working for nVidia? Because I don't remember one.......and tbh, the hyperbole in this article is incredible. By all means "tout your warez" but to just talk like he does is plain misleading:

    Mantle, TrueAudio and Eyefinity give the best gaming experience? Oh really?
    Mantle unlocking its latent potential? All it does is reduce CPU load.
    Implying the 900 series are essentially nVidia's first decent cards....
    Implying games like Civilization: Beyond Earth and Dragon Age: Inquisition run at top performance only on AMD graphics.
    Citing the one internet source that reckons CF is better than SLI. SLI bridges "archaic".....erm, does anyone care if the bridge is there or not? Is that really going to make anyone with half a brain cell change their purchasing decision?
    Talking about the new consoles like it's a good thing.....IMO the new console ports have been ruined due to (justifiably) heavy use of HSA, which we don't have on gaming PCs....making a lot of ports with streaming worlds terrible. Thanks for that AMD!

    But you do get the free games, which you can buy for 5-10 quid due to people dumping the keys like crazy.
    Where were you complaining when Hexus was posting all the stuff about Nvidia tech which was basically PR bumpf?? Nowhere.

    Where's all the criticism about the Nvidia effects?? Turf Effects, etc.?? Nvidia PR saying it makes games XYZ better, and so on??

    Also, do you really want to start arguing with me about Nvidia PR and their sideswipes at the competition??

    You want me to start listing them year by year for you??

    How about these childish cartoons by Nvidia PR in 2009 taking swipes at Intel??

    http://www.tomshardware.com/news/nvi...ides,9015.html

    Nvidia PR is renowned for its aggressiveness - just like Apple's.

    AMD has as much right to put out as much PR bumpf crap as Nvidia has done for years.

    Oh wait! Only Nvidia is allowed to say anything they want.

    Edit!!

    Look at Nvidia criticising AMD/ATI:

    http://news.softpedia.com/news/Nvidi...ds-78793.shtml

    Jen-Hsun Huang, the president and chief executive officer of Nvidia told financial analysts that AMD's Radeon HD 3870 X2 cannot be the highest-performing offering on the graphics market. Moreover, Huang criticized the very idea of joining two graphics processing cores on a single card. Nvidia seems to believe that the classical, single-chip approach pays off best when it comes to high-end graphics products.
    Nvidia NEVER smack-talks competitors' products, ever!! In fact, they never made a dual-GPU card, right?

    Oh wait a second....!

    So honestly, if you don't care about PR, ignore it, and don't comment on just one company doing it unless you feel the need to do so for ALL companies in a particular sector.

    Every single company is doing it - bigging up their products and making the competition seem worse.

    Honestly, at this point I really shouldn't bother arguing with you, as we will just go in circles like the times before.
    Last edited by CAT-THE-FIFTH; 29-10-2014 at 01:26 PM.

  12. #59
    Moosing about! CAT-THE-FIFTH's Avatar
    Join Date
    Aug 2006
    Location
    Not here
    Posts
    32,039
    Thanks
    3,910
    Thanked
    5,224 times in 4,015 posts
    • CAT-THE-FIFTH's system
      • Motherboard:
      • Less E-PEEN
      • CPU:
      • Massive E-PEEN
      • Memory:
      • RGB E-PEEN
      • Storage:
      • Not in any order
      • Graphics card(s):
      • EVEN BIGGER E-PEEN
      • PSU:
      • OVERSIZED
      • Case:
      • UNDERSIZED
      • Operating System:
      • DOS 6.22
      • Monitor(s):
      • NOT USUALLY ON....WHEN I POST
      • Internet:
      • FUNCTIONAL

    Re: Features - Roy Taylor: AMD Radeon GPUs remain unsurpassed

    Quote Originally Posted by Agent View Post

    I fail to see how Shaithis points are moot because Mantle is out now? It's a perfectly valid point that is being made.
    His point is moot though - you need to do some more research about people using Mantle. There are people with older CPUs getting decent framerates in games like BF4 online, and those people have posted their own experiences on forums like OcUK. People with older Core 2 Quads, Phenom II CPUs and so on have managed to make do with a GPU upgrade only. Even look at Thief:

    http://media.bestofmicro.com/L/L/427...Mantle-Mid.png
    http://oi59.tinypic.com/2i6gsxk.jpg
    http://oi61.tinypic.com/ngq7io.jpg

    The cards tested are all under £150 - look at the scores for the Core 2 Quad and the sub-£70 AMD quad-cores.

    You should realise by now that not everyone has a top-end CPU, so Mantle does have some advantages, and they are of more use to the large percentage of real-world gamers.

    OFC, not for hardware enthusiasts with their £150+ latest CPUs and £400 cards.

    Plus it is here now, in games now, and it's going to be in games which have already been announced.

    DX12 games are still incoming, and there is no solid date apart from the middle of 2015, with no actual game announced at all last time I checked.

    But OFC, like with the Fermi delays, let's just say DX12 is coming and coming, and that instantly makes Mantle pointless. Didn't we see that with Rollo??

    For an API which is "not of any importance", it does appear that its most vocal critics are people who will never really use it.

    Remember he says this for example to me:

    Citing the one internet source that reckons CF is better than SLI. SLI bridges "archaic".....erm, does anyone care if the bridge is there or not? Is that really going to make anyone with half a brain cell change their purchasing decision?
    One of those sources was PCPer, who developed FCAT with Nvidia, but he still cannot accept that AMD might have worked on XFire enough to make it pretty decent by now. But I bet he also did not pick up on Nvidia quoting PCPer when FCAT was first released, showing "how superior" they were.

    Statements like this:

    Mantle, TrueAudio and Eyefinity give the best gaming experience? Oh really?
    Nvidia makes similar claims about their tech too.

    Quote Originally Posted by Agent View Post
    Not quite sure where you're going with this, but I think Shaithis is quite accurate in what he's saying. There is no way a 980 should be performing worse than a 290X in most situations - for the situation he has quoted, you'd need a pretty specific setup for it to be a problem. He's pointing out one of the situations where that could be the case - if anything, that's being more than fair by trying to show how it could have happened. It's all well and good showing Mantle, but that's only of use if your titles support it (and Mantle offers an advantage) and it's bug-free. It's not like BF didn't have major issues with Mantle for a while...
    Yes, because Nvidia would NEVER do the same, would they.....!?

    PR quoting best case scenarios.

    Shock and Horror!

    Quote Originally Posted by Agent View Post

    Which is awesome - but if you don't play them? If you're keeping a GPU for a while, new games will be released too. DX / OpenGL support is a given. Mantle is not.
    What if you do, though?? If there is a game which you want to play and it has Mantle support, then I don't see why you should ignore it.

    No different than Nvidia users saying they want to buy an Nvidia card for PhysX effects or CUDA.

    Quote Originally Posted by Agent View Post
    Because nvapi is an entirely different thing (it's a driver interface framework). It's not something to use for rendering really. Mantle is.
    NvAPI has been used in games btw, just not in the way you're thinking.
    NVAPI didn't run on AMD cards or Intel IGPs either, so what??

    Yet since Nvidia was using it, that's fine, and it gave them a nice performance bump for the GTX580 and GTX680 over the HD6000 and HD7000 series cards. So if Mantle does the same, pfft???

    Plus, since the said Mantle games currently have DX11 paths (and probably DX12 ones in the future), I don't see why there is this negativity.

    Don't like it?? Don't use it.

    Simples.

    Quote Originally Posted by edzieba View Post
    Not 450W, 300W. The Silverstone 450W SFX PSU has an unusually nasty fan inside, to the point that people were replacing them with slimline Noiseblockers and taking the hit in maximum draw before thermal shutdown.
    BTW, since I have used NOTHING but SFF PCs since late 2005, I really am just a "tad" clued up about SFF computers!

    Again, maybe you should check the power consumption characteristics of the GTX970 and GTX980 cards. They are closer to GTX670/GTX680/GTX770 level, and according to the dynamic power measurements from TH they can exceed the TDP and power consumption of such cards too.

    Quote Originally Posted by edzieba View Post
    That was exactly my point. With Maxwell, Nvidia can keep pushing power and performance up further (people are overclocking 970s to sustained max TDP without overvolting) with little to no effort for another generation. AMD need a major architecture overhaul just to get down to a competing performance/watt. That puts them on the back foot when it comes to yields for new chip designs, and low yields = high costs, something AMD cannot afford at the moment.

    What you repeatedly fail to realise is that Kepler was based on the "power hungry" Fermi. Maxwell was based on Kepler, with sharing of texture units and greater power, TDP and voltage control granularity.

    So are you honestly saying that GCN cannot be evolved in the same way??

    Really??

    So again, I would advise you to look at some of the tech Nvidia implemented and look back at the slide above.

    Quote Originally Posted by edzieba View Post
    Not doomed, but they're not in a good position. In the CPU market, AMD fell dramatically from way ahead to far behind, and the market as a whole has been worse for it, with Intel uncontested in absolute performance and in performance/watt. To have the same happen in the graphics market would be bad for everyone.
    People say that all the time. See my next post.

    Quote Originally Posted by DanceswithUnix View Post
    Don't think that is quite right.

    The 480 was a power-hungry monster because Nvidia messed up the *implementation*, not the architecture. That gave them silicon full of defects that didn't hit its targets, forcing them to up the voltage to get the thing to run at workable clocks and to disable compute units that were broken. That isn't the same as the architecture needing tweaking. OTOH, my 460 is still going quite nicely; I think that part shows that even early Fermi was quite capable.

    The 480 and 580 were supposedly very similar designs, just with the 580 fixed. Basically, the 580 is what the engineers would have wanted to put out as the 480, had there not been the usual commercial pressures of releasing new product all the time so you don't go out of business.

    http://www.anandtech.com/bench/product/1135?vs=1350

    So Nvidia fixed a lemon, because they are Nvidia who occasionally come out with lemons as well as the golden parts like the 8800GT. AMD don't tend to have that implementation problem, so without the step change of lemon fixing they have more work to do.

    OFC we are coming up to the time for 20nm parts to come out, and this is traditionally where AMD shine and Nvidia put their foot in their mouth like they did with the 480.

    Edit to add: Just thought, the R9 280 to R9 285 refresh was pretty impressive. Better performance from the same number of compute elements, with a third of the memory width gone and 50W less power. I think that shows they are serious about updating GCN.
    The second-generation Fermi parts had better power containment features, and the TSMC 40nm process was yielding better by then.

    People forget that the GTX580 was not massively better than the GTX480 as regards power consumption, but yields were better by then, so they could enable the whole GPU, which meant better performance; together with the better TDP confinement mechanisms, that meant performance/watt improved.

    But I also find it quite funny that people don't realise that even a "power hungry" R9 290X or GTX780TI consumes LESS power than a GTX570 or GTX580 in most cases. It just shows you that 28nm has seen a decent decrease in card power consumption overall.
    Last edited by CAT-THE-FIFTH; 29-10-2014 at 04:34 AM.

  13. #60
    Moosing about! CAT-THE-FIFTH's Avatar
    Join Date
    Aug 2006
    Location
    Not here
    Posts
    32,039
    Thanks
    3,910
    Thanked
    5,224 times in 4,015 posts
    • CAT-THE-FIFTH's system
      • Motherboard:
      • Less E-PEEN
      • CPU:
      • Massive E-PEEN
      • Memory:
      • RGB E-PEEN
      • Storage:
      • Not in any order
      • Graphics card(s):
      • EVEN BIGGER E-PEEN
      • PSU:
      • OVERSIZED
      • Case:
      • UNDERSIZED
      • Operating System:
      • DOS 6.22
      • Monitor(s):
      • NOT USUALLY ON....WHEN I POST
      • Internet:
      • FUNCTIONAL

    Re: Features - Roy Taylor: AMD Radeon GPUs remain unsurpassed

    Quote Originally Posted by kompukare View Post
    Well that and the usual people who hunt forums and love seem to think that Intel or Nvidia cannot do anything wrong, while AMD cannot do anything right.
    Agreed.

    Looking at the last few years of GPU releases.

    2002-2003 ATI is doomed due to the cancellation of the 8500XT and Nvidia having the TI4200 and TI4600.

    2003-2004 Nvidia is doomed since the R300 is first to DX9 and faster than the FX series.

    2004-2005 ATI is doomed since the 6000 series Nvidia cards support DX9C and theirs don't. Nvidia has lower power consumption.

    2005-2006 Shifts between ATI is doomed and Nvidia is doomed.

    2006-2007 ATI is doomed as the G80 and G92 have better performance and lower power consumption than the HD2000 and HD3000 series.

    2008-2009 ATI is first doomed since they have nothing to compete with the GT200. Then AMD launches the R700, and now Nvidia is doomed, as Nvidia has to price-cut huge chips against smaller AMD ones.

    2009-2010 Nvidia is doomed since ATI/AMD is first to DX11, and has lower power consumption and smaller dies than the GTX400 series. Although soon, with the GTX460 series, AMD is doomed.

    2010-2011 AMD is partially doomed since they don't have the fastest card anymore, and Nvidia has better tessellation.

    Late 2011 to early 2012. AMD is doomed due to the HD7970, and even more doomed with the GTX680.

    Late 2012 to middle 2013. AMD is doomed as they have nothing to compete once the GeForce Titan and GTX780 are released.

    Late 2013. AMD is still doomed with the R9 290 and R9 290X releases, since Nvidia quickly launched the GTX780TI, and the AMD cards have black screens, throttle, run too hot and explode all the time.

    Late 2012 to 2013. Nvidia is doomed due to AMD winning console contracts.

    Early 2014. AMD is doomed due to the GTX750TI.

    Late 2014. AMD is doomed due to the GM204.

    Potential next doom point - Nvidia releases the 20nm GM200/GM210 in small quantities at £1000, and even if AMD has the fastest card in the R9 390X at £500 before then, it is still doomed.

    And so on.

    When it comes to GPU releases, people seem to have very short memories indeed. At every launch the standard stuff happens. Hilarious.

    It's so predictable, so much so that I predicted the response to this launch months before it came out. I believe I mentioned it here or on OcUK or some other forum.

    Funnily enough I got quite good at predicting the response to Apple and Android launches for a few years too.

    Last edited by CAT-THE-FIFTH; 29-10-2014 at 03:49 AM.

  14. #61
    sirhobbes3
    Guest

    Re: Features - Roy Taylor: AMD Radeon GPUs remain unsurpassed

    I wouldn't say unsurpassed. While AMD isn't completely beaten, I'd say nVidia has the edge on most things. I'd still use AMD in a pure budget build for someone, because that new Pentium line from Intel just won't cut it for me when I can get an APU from AMD that will do much better.

  15. #62
    Senior Member
    Join Date
    Jan 2009
    Posts
    342
    Thanks
    0
    Thanked
    27 times in 23 posts

    Re: Features - Roy Taylor: AMD Radeon GPUs remain unsurpassed

    Quote Originally Posted by CAT-THE-FIFTH View Post
    BTW, since I have used NOTHING but SFF PCs since late 2005, I really am just a "tad" clued up about SFF computers!

    Again, maybe you should check the power consumption characteristics of the GTX970 and GTX980 cards. They are closer to GTX670/GTX680/GTX770 level, and according to the dynamic power measurements from TH they can exceed the TDP and power consumption of such cards too.
    Right, let's actually look at that TH article, shall we?
    The gaming-based power consumption numbers show just how much efficiency can be increased if the graphics card matches the power it draws to the actual load.
    The values above have potential consequences for the everyday operation of these graphics cards, as they represent what can be expected when running performance-hungry compute-oriented applications optimized for CUDA and OpenCL.
    And on the next page (after a disclaimer that previous results from a modified non-reference card have been removed due to being non-representative):
    When it comes down to it, it's possible for our most taxing workloads to take Maxwell back to Kepler-class consumption levels. In fact, Gigabyte's factory overclocked GeForce GTX 980 actually draws more power than the GeForce GTX Titan Black without offering a substantial performance gain in return. As you can see below, the reference GeForce GTX 980 draws substantially less power, though.
    The tl;dr being: Compute workloads bring power consumption up to max TDP. Gaming loads do not. Overclocked cards use a lot of power (duh), reference cards do not.

    Unless you are aiming for a compute workstation or have an overclocked non-reference card (with a higher rated power consumption), power draw will be significantly lower when gaming.


    Now, AMD certainly have the opportunity to implement more rigorous power gating for a Tonga successor. But that just brings them down closer to Maxwell territory. In order to significantly surpass that, they need a new architecture, or a new process, or both. And we all know that changing architecture AND process is a recipe for low yields and lower than expected performance (e.g. early Fermi).

    Looking at TH's own GPU efficiency chart from that same article, you can see the challenge AMD has to produce a dramatic increase in efficiency just to remain competitive with current parts.

    There's no one insurmountable problem AMD need to overcome, but the combination of moving to a new process, a long time without an architecture refresh (or even longer if they roll out a whole line based on Tonga) and the need to perform, at a minimum, significant power modifications to Tonga? Something has to give. Doing everything at once will be a big gamble, and TSMC is continuing to have trouble getting 20nm big-chip yields up to an acceptable level. AMD deal in volume; producing a few golden samples at massive cost is dangerous with their current finances.


    Remember also, TDP isn't only relevant in mobile and volume-constrained scenarios; for multiple generations now, top-end GPUs have been constrained by the 300W PCI-E max power consumption limit. Over-limit cards have been produced, but those sell only to end users. Pre-built OEMs are very reluctant to install out-of-spec parts due to the increased support liability (you not only need to pick one out-of-spec part, but need to demand assurances from the manufacturers of all interacting parts that operating alongside an out-of-spec part will not void their warranties too, which costs a lot more), which cuts out a large market segment.

    It should be obvious, but AMD is not 'better' than Nvidia, and Nvidia is not 'better' than AMD. Healthy competition between broadly equivalent parts is needed to keep prices down for consumers and to drive development up. But sticking your head in the sand and ignoring the realities of semiconductor development helps no one.

  16. #63
    spl
    spl is offline
    Member
    Join Date
    Jun 2013
    Posts
    181
    Thanks
    15
    Thanked
    8 times in 8 posts

    Re: Features - Roy Taylor: AMD Radeon GPUs remain unsurpassed

    Pfffffffffffffffft. In other news, AMD learns trolling.

  17. #64
    Moosing about! CAT-THE-FIFTH's Avatar
    Join Date
    Aug 2006
    Location
    Not here
    Posts
    32,039
    Thanks
    3,910
    Thanked
    5,224 times in 4,015 posts
    • CAT-THE-FIFTH's system
      • Motherboard:
      • Less E-PEEN
      • CPU:
      • Massive E-PEEN
      • Memory:
      • RGB E-PEEN
      • Storage:
      • Not in any order
      • Graphics card(s):
      • EVEN BIGGER E-PEEN
      • PSU:
      • OVERSIZED
      • Case:
      • UNDERSIZED
      • Operating System:
      • DOS 6.22
      • Monitor(s):
      • NOT USUALLY ON....WHEN I POST
      • Internet:
      • FUNCTIONAL

    Re: Features - Roy Taylor: AMD Radeon GPUs remain unsurpassed

    Quote Originally Posted by edzieba View Post
    Right, let's actually look at that TH article, shall we?


    And on the next page (after a disclaimer that previous results from a modified non-reference card have been removed due to being non-representative):


    The tl;dr being: Compute workloads bring power consumption up to max TDP. Gaming loads do not. Overclocked cards use a lot of power (duh), reference cards do not.

    Unless you are aiming for a compute workstation or have an overclocked non-reference card (with a higher rated power consumption), power draw will be significantly lower when gaming.
    Let's look at the article again. Firstly, flat out, the TDP containment mechanism pushes the TDP to a much higher level, which means that under compute workloads big Maxwell will not be as efficient.

    What is even funnier is that you fail to realise, on all levels, that both GK110 and Hawaii have MASSIVELY higher DP compute performance, and those large memory buses and loads of RAM chips are there for certain GPGPU operations which are memory-bandwidth limited. So you are comparing a stripped-down gaming part with parts which have additional functionality; plus, Hawaii has audio DSPs consuming more power too. People like you did the same with Tahiti vs the GK104.

    Quote Originally Posted by edzieba View Post

    Now, AMD certainly have the opportunity to implement more rigorous power gating for a Tonga successor. But that just brings them down closer to Maxwell territory. In order to significantly surpass that, they need a new architecture, or a new process, or both. And we all know that changing architecture AND process is a recipe for low yields and lower than expected performance (e.g. early Fermi).

    Looking at TH's own GPU efficiency chart from that same article, you can see the challenge AMD has to produce a dramatic increase in efficiency just to remain competitive with current parts.

    There's no one insurmountable problem AMD need to overcome, but the combination of moving to a new process, a long time without an architecture refresh (or even longer if they roll out a whole line based on Tonga) and the need to perform, at a minimum, significant power modifications to Tonga? Something has to give. Doing everything at once will be a big gamble, and TSMC is continuing to have trouble getting 20nm big-chip yields up to an acceptable level. AMD deal in volume; producing a few golden samples at massive cost is dangerous with their current finances.


    Remember also, TDP isn't only relevant in mobile and volume-constrained scenarios; for multiple generations now, top-end GPUs have been constrained by the 300W PCI-E max power consumption limit. Over-limit cards have been produced, but those sell only to end users. Pre-built OEMs are very reluctant to install out-of-spec parts due to the increased support liability (you not only need to pick one out-of-spec part, but need to demand assurances from the manufacturers of all interacting parts that operating alongside an out-of-spec part will not void their warranties too, which costs a lot more), which cuts out a large market segment.

    It should be obvious, but AMD is not 'better' than Nvidia, and Nvidia is not 'better' than AMD. Healthy competition between broadly equivalent parts is needed to keep prices down for consumers and to drive development up. But sticking your head in the sand and ignoring the realities of semiconductor development helps no one.
    Let's look at the gaming power consumption figures in more detail, though.

    The same was noted with the GTX750TI (GM107), since TH uses very high-speed measuring equipment - which again does not negate what I said before, or this slide you keep ignoring on purpose:

    http://images.anandtech.com/doci/797...8-AM_575px.jpg

    Maxwell is very peaky in power consumption, but anybody who had bothered to keep up to date would have known this for ages.
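
    To illustrate the measurement point (Python, with an invented power trace - not TH's data): a card that spikes briefly looks tame on slow metering but peaky on high-speed equipment:

        # One sample per millisecond for 10 s: a 10 ms spike to 165 W each
        # second, 145 W otherwise (numbers invented for illustration).
        trace_w = [165.0 if (t % 1000) < 10 else 145.0 for t in range(10000)]

        average = sum(trace_w) / len(trace_w)
        slow_peak = max(sum(trace_w[i:i + 1000]) / 1000.0
                        for i in range(0, len(trace_w), 1000))  # 1 s averaging
        fast_peak = max(trace_w)                                # 1 ms resolution
        print(round(average, 1), "W average")                      # 145.2 W
        print(round(slow_peak, 1), "W peak seen by a slow meter")  # 145.2 W
        print(round(fast_peak, 1), "W true peak")                  # 165.0 W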

    I keep many CPU and GPU rumour and info threads up to date on here and on OcUK, so I actually bother to read a lot of the bumpf AMD, Nvidia and Intel release.

    What you fail to realise is that, like AMD, Nvidia has been developing a whole load of similar tech which is going into their SoCs. You are stuck in a microcosm without seeing the bigger picture at all.

    AMD is at a massive process-node disadvantage to Intel, so it needs to develop similar tech for its APUs too.

    Read that slide again and read what Maxwell has.

    Plus, trying to sound cool by saying AMD needs a new uarch all the time is funny.

    The last MAJOR uarch change from Nvidia was the GT200 to Fermi jump - the last for AMD was the jump from VLIW4/VLIW5 to GCN.

    Or do you want me to explain each of the last few generations to you??


    Quote Originally Posted by CAT-THE-FIFTH View Post
    Agreed.

    Looking at the last few years of GPU releases.

    2002-2003 ATI is doomed due to the cancellation of the 8500XT and Nvidia having the TI4200 and TI4600.

    2003-2004 Nvidia is doomed since the R300 is first to DX9 and faster than the FX series.

    2004-2005 ATI is doomed since the 6000 series Nvidia cards support DX9C and theirs don't. Nvidia has lower power consumption.

    2005-2006 Shifts between ATI is doomed and Nvidia is doomed.

    2006-2007 ATI is doomed as the G80 and G92 have better performance and lower power consumption than the HD2000 and HD3000 series.

    2008-2009 ATI is first doomed since they have nothing to compete with the GT200. Then AMD launches the R700, and now Nvidia is doomed, as Nvidia has to price-cut huge chips against smaller AMD ones.

    2009-2010 Nvidia is doomed since ATI/AMD is first to DX11, and has lower power consumption and smaller dies than the GTX400 series. Although soon, with the GTX460 series, AMD is doomed.

    2010-2011 AMD is partially doomed since they don't have the fastest card anymore, and Nvidia has better tessellation.

    Late 2011 to early 2012. AMD is doomed due to the HD7970, and even more doomed with the GTX680.

    Late 2012 to middle 2013. AMD is doomed as they have nothing to compete once the GeForce Titan and GTX780 are released.

    Late 2013. AMD is still doomed with the R9 290 and R9 290X releases, since Nvidia quickly launched the GTX780TI, and the AMD cards have black screens, throttle, run too hot and explode all the time.

    Late 2012 to 2013. Nvidia is doomed due to AMD winning console contracts.

    Early 2014. AMD is doomed due to the GTX750TI.

    Late 2014. AMD is doomed due to the GM204.

    Potential next doom point - Nvidia releases the 20nm GM200/GM210 in small quantities at £1000, and even if AMD has the fastest card in the R9 390X at £500 before then, it is still doomed.

    And so on.

    When it comes to GPU releases, people seem to have very short memories indeed. At every launch the standard stuff happens. Hilarious.

    It's so predictable, so much so that I predicted the response to this launch months before it came out. I believe I mentioned it here or on OcUK or some other forum.

    Funnily enough I got quite good at predicting the response to Apple and Android launches for a few years too.


    I will quote my last post again for you.

    The hilarious doom and gloom cycles on the internet. Makes me laugh.

    At every new GPU launch you can see people emerging from the woodwork making the same grandiose claims.

    It's something I expect every year.

    I bet you thought Nvidia was doomed when Fermi was first released?? ROFL.

    So you can keep peddling your doom-mongering repeatedly, but don't expect me to agree with you, since history appears to say otherwise.

    So we can agree to disagree and keep it at that. I am getting a bit fed up having to repeat the same stuff again and again now.
    Last edited by CAT-THE-FIFTH; 29-10-2014 at 12:31 PM.

