VideoCardz has published an exclusive support list with many upcoming AAA titles.
I don't have much experience with Mantle. In BF4 it causes me more problems than anything. I used it in Thief too, but I used Mantle the whole time, so I'm not sure what difference it made. Not even sure I've used it in any other games.
But there is one big potential problem I can see with Mantle. If it works out as a really good API a little further down the road, I think it's very likely we'll get lower-spec graphics cards for the money - less bang per buck (or equal bang per buck when using Mantle), seeing as we can do more with the power we have. It will still be awesome having a more optimised API, but while it may seem to squeeze a bit more life out of our current hardware, later on it will probably feel like nothing much changed as far as the performance we get for our money is concerned. Hope I'm wrong!
So, as it stands, with relatively budget-minded gaming being my mindset at the moment, I pose this question.
If someone wanted to get the best CPU performance in a budget-minded system, what would people's recommendations be:
1) A cheap AMD quad-core - more cores
2) The Intel Pentium Anniversary Edition overclocked super high - more single-threaded performance
It's single-threaded performance vs multithreading, and with Mantle uptake looking decent it makes you wonder: where is the money best spent?
It depends on what games they want to play. Some games will be better on the Pentium and others will be better on the AMD chips.
If it were a general purpose gaming system it would hinge on one question:
"Would they be willing to upgrade the CPU at all and if so in the next 18 months??"
If not, then I would go with the AMD CPU, although people should not forget the Haswell Core i3 either.
The Pentium dual-core is not well balanced IMHO, and it does not help that a number of reviews are benching it under favourable (if not slightly 'interesting') conditions to make it look a tad better.
Even then, it does not change the fact that in many supposedly lightly threaded games a Haswell Core i3 still matches or exceeds an overclocked Pentium dual-core, or simply produces high enough framerates with decent frametimes to give a good experience.
Yet the HT on the Core i3 gives it enough legs to help in more heavily threaded games.
One benchmark which annoys me is Skyrim. It's a third-person single-player game, yet the review e-peen when one CPU produces 100FPS and another produces 80FPS is just hilarious.
It's even funnier when many players mod the game heavily, meaning the graphics card is more of a limitation in many cases, and many of the reviews don't test with the most popular mods enabled either.
I increasingly get the impression many reviewers have not actually played the games they benchmark. An example is Crysis 3 - parts of the SP campaign are lightly threaded, but others will push all four cores of a Core i5 to the maximum and even use HT effectively enough for a Core i7 to destroy a Core i5. MP is more of the latter too, and it's why I got a Xeon E3 in the first place.
As a result, people read these reviews and get a distorted view of actual performance in these games, and of what is actually needed.
They might find that simply upgrading their old Core 2 or Phenom II rig with a new graphics card does the job.
Or get an older chip like the Xeon X5650 - an Intel hex-core that hits 4.2GHz with all power saving still on and within Intel-spec voltage - for only £90, plus a socket 1366 motherboard for about the same and as big a graphics card as you can afford. Best of all worlds.
No, I think you have that backwards.
While there certainly are some system specs where Mantle helps low-end systems, Mantle makes far more sense where the GPU is being held back by the CPU. Scenarios like these:
- AMD quad core with R9-270X or R9-280
- Intel i3 with R9-270X or R9-280
- AMD FX6300 or FX8350 with R9-280X or R9-290X
- Intel i5 non-K with R9-280X or R9-290X
- Intel i5/i7 K with crossfired R9-280X, R9-290 or R9-290X
- AMD FX7350 with crossfired R9-280X, R9-290 or R9-290X
- i7 LGA2011 with tri-fire R9-290 or R9-290X
The point being: anywhere the CPU might be holding back the GPU's potential. It does not make sense with something like a fast i7 paired with just a single R9-280.
That's why in BF4 it is heavy-load multiplayer (which is next to impossible to benchmark) where the big gains are to be found.
Mantle lowers the CPU overhead, or rather the CPU spends less time organising draw calls. It does not make the GPU faster unless the GPU spends time waiting for the CPU. What it does do, even in GPU-limited scenarios, is make frame rates more consistent, so better minimum frames (which are far more important than maximum frames - just a pity so few review sites realise this).
Of course, since the GPU spends less time waiting on the CPU, power consumption may go up as the GPU doesn't get a chance to idle.
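To make the CPU-bound vs GPU-bound distinction concrete, here is a toy model - all per-draw-call costs and counts are invented for illustration, not measurements of any real driver or game:

```python
# Toy model: frame time is limited by whichever of CPU submission
# or GPU rendering takes longer. All numbers are illustrative guesses.

def frame_time_ms(draw_calls, cpu_cost_per_call_us, gpu_render_ms):
    cpu_ms = draw_calls * cpu_cost_per_call_us / 1000.0
    return max(cpu_ms, gpu_render_ms)

draw_calls = 5000        # a busy multiplayer-style scene (assumed)
gpu_render_ms = 12.0     # assume the GPU alone needs 12 ms per frame

# Hypothetical per-draw-call CPU costs: higher for a DX11-style
# driver path, lower for a thin API like Mantle.
for api, cost_us in [("DX11-like", 4.0), ("Mantle-like", 1.0)]:
    ft = frame_time_ms(draw_calls, cost_us, gpu_render_ms)
    print(f"{api}: {ft:.1f} ms/frame ({1000.0 / ft:.0f} FPS)")
```

In the 'DX11-like' case the CPU is the limit (~50FPS); cut the per-call cost and the GPU becomes the limit (~83FPS) without getting any faster itself - which is also why a fast i7 with a single mid-range card sees little benefit.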
All well and good. Can AMD now focus on cooler cards that won't fry the rest of my system? That way there might be a chance I'd opt for them over Nvidia next time, and Mantle might have half a chance of becoming a purchase consideration.
CIV:BE and Elite: Dangerous are reasons to go AMD for me at least.
Edit - Star Citizen as well.
Well, yes, more or less. Mantle doesn't make the GPU faster except in scenarios where DirectX is the limit. Of course, the current titles (BF4, Thief, PvZ) had Mantle added very late in development, but games designed around Mantle from the start are more likely to have more units on screen, better AI and so on - that is, things which use the CPU.
While Nvidia's cards have a bit better perf/watt, the difference is hardly that great (aside from the 750Ti). These figures are taken from TPU's power chart (TPU measures only the card, not the whole system):
660Ti 137W vs 270X 122W
770 180W vs 280X 208W
780 220W vs 290 263W
780Ti 268W vs 290X 294W
Some differences for sure, but nothing that would 'fry the rest of my system'. The telling thing is that if you look at TPU's other chart, the 'Performance per Watt' one, prior to the 750Ti coming out the spread between the worst and the best 28nm card is 81% to 115%. So 115/81 = 1.42, which shows there isn't any great magical difference. But yes, AMD could certainly improve, and that probably means cutting DP performance from their gaming cards like Nvidia does, plus more power gating and some die binning (which AMD don't seem to do much of).
http://www.techpowerup.com/reviews/S...i-X_OC/25.html
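For anyone who wants to sanity-check the arithmetic, a throwaway snippet using only the figures quoted above:

```python
# Card-only power figures quoted above (from TPU), Nvidia vs AMD.
pairs = {
    "660Ti vs 270X": (137, 122),
    "770 vs 280X":   (180, 208),
    "780 vs 290":    (220, 263),
    "780Ti vs 290X": (268, 294),
}
for name, (nv, amd) in pairs.items():
    print(f"{name}: AMD delta {amd - nv:+d}W ({amd / nv:.0%} of the Nvidia card)")

# TPU perf-per-watt spread: worst 28nm card at 81%, best at 115%
# of the baseline, as quoted above.
print(f"best/worst perf-per-watt ratio: {115 / 81:.2f}")  # ~1.42
```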
Those R9 280X figures are for an overclocked Gigabyte card compared against a stock GTX 770. For instance, TPU tested this MSI card:
http://www.techpowerup.com/reviews/M...g_6_GB/22.html
Right, changed the figures. But I think a lot of people get confused and assume that because the reference 290X was allowed to go up to 94°C it generated more heat.
Core temperature is not heat output.
A card which consumes 260W and has a core temp of 60°C and one which consumes 260W but has a core temp of 94°C will generate the same amount of heat. (Obviously, different card designs may dump more or less of that heat inside the case: reference blower-type cards may be noisy, but they tend to push heat outside the case, unlike quieter aftermarket open-air coolers.)
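A minimal sketch of that point - the exhaust fractions below are pure assumptions for illustration, not measurements of any real cooler:

```python
# At steady state a card's heat output equals its power draw; core
# temperature only reflects how effectively that heat is moved away.
# The exhaust fractions are assumed values, not measured ones.

power_draw_w = 260  # same power draw for both hypothetical cards

coolers = {
    "reference blower, 94C core": 0.80,      # assumed: most heat exits the case
    "open-air aftermarket, 60C core": 0.20,  # assumed: most heat stays inside
}

for name, exhaust_fraction in coolers.items():
    inside_w = power_draw_w * (1 - exhaust_fraction)
    print(f"{name}: {power_draw_w}W total heat, ~{inside_w:.0f}W into the case")
```

Same 260W of heat either way; the cooler design only changes where it ends up.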