Quote:
Ryzen CPU grunt, Vega GPU prowess in one chip.
I am sad to see that Hexus showed much more "discrete gaming" than IGP testing, knowing full well that these CPUs were built primarily for IGP use.
It is the most important feature, but in this test it has been presented as a side feature.
One could think that Intel might have had influence on these tests. Can't wait to see how the Intel+Vega chip will be tested. Will discrete gaming get more importance than the IGP?
Normally you would show many IGP uses (gaming, media, ...) and just mention discrete gaming as probably the least important information for this kind of product.
It would be good if you could rearrange the review.
I am adding more reviews to the review thread, and the 2400G can actually fight a GT1030 with DDR4?? Just, wow.
I am more impressed by the APUs than I was even by the Ryzen CPU launch.
Does FreeSync work OK with these APUs??
Edit!!
Hmm, on further reading there seem to be some niggles - not sure why AMD hasn't sorted them out yet, as the mobile versions have been around for a few months. Oh well, it's to be expected from an AMD launch.
:p
Also, reading from other sites, it seems that there is a limit to how much RAM can be assigned to the IGP (2GB).
What is your experience?
Do you think that 4GB would dramatically change results for games or allow higher quality modes?
Because with a system with 16GB of RAM, 12GB would go to the OS and 4GB to the IGP. Seems like a good balance.
If you use 3,200MHz RAM, games should run better, right?
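For the memory-split sums above, the arithmetic is trivial but worth writing down - a quick sketch in Python (the 4GB carve-out is the hypothetical from the post; reviews report the UMA frame buffer is currently capped at 2GB):

```python
# Hypothetical memory split discussed above (actual UMA cap is reportedly 2GB)
total_ram_gb = 16
uma_framebuffer_gb = 4  # amount carved out for the IGP

os_and_apps_gb = total_ram_gb - uma_framebuffer_gb
print(f"OS/apps: {os_and_apps_gb}GB, IGP: {uma_framebuffer_gb}GB")  # → OS/apps: 12GB, IGP: 4GB
```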
Hi there,
There are actually seven benchmarks for discrete and seven for IGP - it may look like more for the former because the graphs are taller. The reason for the full set of discrete is twofold: we already have all the numbers from other comparable chips and, given that even the Vega IGP cannot muster enough horsepower in major titles, we reckon many users will add a discrete card to a system.
These look fantastic for budget gaming rigs - capable of entry-level gaming on their own, with enough CPU grunt to comfortably drive a dGPU later if needed.
Given the price of GPUs and the limited time I have for gaming now I'm strongly considering selling my i7-6700 and GTX1070 and just running one of these, then adding a GTX2050(Ti) or AMD equivalent whenever they appear. My Switch is getting more gaming use than my PC now too.
Edit: Thinking logically that would be a stupid thing to do, I'll just chuck my ancient 7870xt in and sell the GTX 1070. If I were buying now from scratch though, I'd definitely have a 2400G.
Hi again,
The benchmarks were done with a 2GB UMA framebuffer and system memory at 2,933MHz CL14-14-14-34-1T. There is no current option to set a larger UMA space. Using 3,200MHz RAM at the same timings increases performance by around three to five per cent, going by some quick tests.
The Stilt looked at performance, and the loss of L3 cache seems to not really affect things in most cases; it also seems there is actually a slight increase in core IPC overall.
Edit!!
Analysis from The Stilt:
https://forums.anandtech.com/threads...#post-39301964
https://i.imgur.com/0qXOU7x.png
Quote:
Some of my personal thoughts and experiences on Raven:
Based on the results of my test suite, the IPC of Raven varies between -4.8% and +2.8% compared to Zeppelin, the average difference being a ~1.5% improvement. The difference is most likely a result of the changes made to the L2 & L3 caches, rather than of changes made to the actual Zen CPU cores themselves.
The early rumors were correct and Raven does in fact have a significantly lower L2 cache latency than Zeppelin does. The L2 cache in Raven has 12 CLK latency, whereas the L2 latency for Zeppelin is 17 CLKs. The L2 caches in Zeppelin never posed a limitation of any sort to the Fmax, so considering the halved L3 cache in Raven, getting rid of the “slack” in the L2 latency was a smart and most likely a highly beneficial move.
It is hard to tell exactly how small or large the penalty from the halved L3 cache is, as the L2 has been altered significantly at the same time. Generally, however, the performance hit from the halved L3 cache varies between small and non-existent. Workloads which hit the > L1 caches hard, such as the Bullet Physics library, perform < 5% worse on Raven than on Zeppelin, which is equipped with twice the L3 cache per core. Considering that Bullet was relatively the worst-performing workload in the whole test suite for Raven, it is rather safe to say that the hit from the smaller L3 cache is extremely minor in general.
The difference between the Vega 8 (8CU/2RB) iGPU and Vega 11 (11CU/2RB) iGPU at the same frequency is fairly minor, usually around 8-11% depending on the memory frequency. At stock, Vega 8 operates at an 1100MHz engine clock and Vega 11 at a 1240MHz (1251MHz nominal) engine clock. The typical overclock for both of the variants is >= 1600MHz at 1.200V SoC voltage. Due to the present memory bandwidth limitation, both of them will perform almost the same when they are overclocked close to, or at, the typical maximum frequency.
One major thing to consider prior to overclocking the iGPU on Raven APUs is the power consumption. Most of the mainstream AM4 motherboards have a 2-phase VRM for the VDDCR_SoC voltage rail (of varying quality and with varying cooling as well), which on Raven not only supplies the SoC portion of the chip but the GPU cores as well. At stock the peak power consumption of Vega 11 is around 36W. When overclocked to the typical 1600MHz engine frequency, the power consumption will rise to 55-60W. While 60W doesn't sound too high, it is more than plenty for the average 2-phase VRM (around 25A per phase).
Just like Zeppelin, Raven also features the so-called "OC-Mode". On Raven there are two separate triggers to activate the "OC-Mode": increasing the CPU frequency or increasing the iGPU engine frequency. Triggering either one will get rid of all of the limiters (power, current, utilization) and voltage controllers, the same way as it did on Zeppelin. The only difference is that by triggering just the iGPU "OC-Mode", the Turbo / XFR features of the CPU will not be lost like they were on Zeppelin. However, at least for the time being, it is not advised to only trigger the iGPU "OC-Mode": activating either of the "OC-Modes" will disable all of the voltage controllers, meaning that while Turbo / XFR still remain active the CPU voltage will rise to extremely high levels. When the CPU "OC-Mode" is activated, Turbo and XFR will be disabled as well, just like on Zeppelin, and the CPU voltage will remain at reasonably sane levels due to the slightly lower resulting frequencies.
Activating either of the “OC-Modes” will also immediately disable the dLDO for the GPU cores. At stock the iGPU dLDO feeds on the VDDCR_SoC voltage rail and the typical voltage drop on the regulator is around 250mV. Once the “OC-Mode” is activated the GPU dLDO is placed in a bypass mode, meaning the GPU cores will then receive the source voltage directly without any further dropouts.
The memory controller on Raven clearly contains some changes in comparison to Zeppelin; however, the said changes unfortunately appear to be rather minor, and quite possibly affect the firmware of the controller more than the actual hardware IP itself. On average the memory latency has decreased by ~3% at the same settings, but the bandwidth seems to have regressed slightly at the same time. Also, the highest achievable memory frequency seems to be exactly the same as on Zeppelin, 3400 - 3533MHz depending on the silicon quality, the motherboard and the DRAM modules used. Fortunately, at least the memory training speed and reliability have been vastly improved.
Similar to Zeppelin, the frequency headroom for the CPU cores themselves is very slim over the stock frequencies. The typical, highest practical CPU frequency will be around 3.85 - 3.95GHz depending on the silicon quality.
http://i.imgur.com/8Rch6JF.png
Quote:
Higher than the mentioned frequencies might be possible; however, achieving them will require the voltage to be raised to a point where the power efficiency is long gone and the lifetime of the silicon is reduced. At frequencies beyond the inflection point (3.9GHz in the chart) the cost of the last 100MHz in frequency can easily be a > 25% increase in the power consumption.
With the tested samples 4.1GHz could not be achieved even at 1.550V, despite 4.0GHz being deemed stable at 1.375V, which is already high but still well within the realms of the sustainable.
With Raven there is also another aspect which is not present on Zeppelin: unlike Zeppelin, Raven uses conventional TIM (instead of indium sTIM) between the core and the heatspreader. The conventional TIM used on Raven isn't the only factor which affects its thermals, either. Due to the extreme thinness of the Raven die, the heatspreader used for Raven AM4 APUs has been redesigned. Normally the contact surface inside the heatspreader is perfectly flat. The heatspreaders used on Raven have a "hump" inside them, which allows the heatspreader to make contact with the die itself. Without the "hump" the heatspreader would only make contact with the SMD components located around the die, which stand taller than the die itself. The "hump" adds an extra 0.5mm to the heatspreader thickness and therefore increases the thermal resistance of the heatspreader as well.
Despite Raven's slightly larger die size, the temperatures are still significantly higher at the same power dissipation and cooling. Even at a modest 65W power dissipation the CPU cores can reach in excess of 70°C.
An aftermarket cooler is definitely recommended, at least for the 2400G, especially if there are any plans to overclock the chip. The 2400G in its stock configuration is already somewhat bound by the default 65W power limit, and the chip can easily dissipate up to 120W of heat when it is overclocked to the typical maximum figures.
https://i.imgur.com/dMwRtn9.jpg
Some ballpark 3D performance figures, based on my own testing: RX 550 is around 22% faster and the RX 560 around 68% faster than a stock 2400G APU.
When the 2400G APU is overclocked to the typical maximum figures (1600MHz engine and 3400MHz DRAM) its performance is almost identical to a stock RX 550.
- 2400G at stock: 1240MHz engine, 2933MHz DRAM (3236 in 3DMark Fire Strike)
- 2400G at a typical max OC: 1600MHz engine, 3400MHz DRAM (3960 in 3DMark Fire Strike)
- RX 550 at stock: 1210MHz engine, 7000MHz (QDR) DRAM (3955 in 3DMark Fire Strike)
- RX 560 at stock: 1210MHz engine, 7000MHz (QDR) DRAM (5430 in 3DMark Fire Strike)
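Taking The Stilt's Fire Strike scores above at face value, the quoted percentage gaps check out - a quick sketch (the scores are the ones listed in the post):

```python
# 3DMark Fire Strike scores quoted above
scores = {
    "2400G stock": 3236,
    "2400G typical max OC": 3960,
    "RX 550 stock": 3955,
    "RX 560 stock": 5430,
}

base = scores["2400G stock"]
for name, score in scores.items():
    delta = (score / base - 1) * 100
    # RX 550 works out ~22% ahead and RX 560 ~68%, matching the quoted
    # figures, while the OC'd 2400G lands within a few points of a stock RX 550
    print(f"{name}: {score} ({delta:+.1f}% vs stock 2400G)")
```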
If you are unfamiliar with some of the terms used, please check the original Ryzen: Strictly Technical write-up.
These are perfect for what I want and need going forward....
WTF is happening here:
https://www.pcgamesn.com/amd-raven-ridge-overclocking
Quote:
A weird bug/feature has appeared during our testing of the Ryzen 5 2400G Raven Ridge APU that means our chip overclocks by a huge amount when you put it to sleep. You may have seen some leaked benchmarks appear online, and yes... they're true, it can hit 4.56GHz on air.
Check out the full review of the AMD Ryzen 5 2400G.
This bug/feature is either in the darling little MSI B350I Pro AC motherboard that came as part of the Raven Ridge test kit, or in the Ryzen 5 2400G APU itself. It sees one of them automatically overclocking the chip far beyond what I’ve been able to do in the BIOS, or with the Ryzen Master utility.
In my testing I’ve only been able to push the top Raven Ridge APU up to 4.05GHz using simple multiplier tweaking. I have been able to get the chip booting into Windows, and running some light gaming workloads, at 4.2GHz, but put any serious CPU load onto it and the chip falls over.
But, with the bizarre sleepy overclock, that same APU is able to top 4.56GHz and remain completely stable under full gaming and CPU testing loads.
AMD Raven Ridge overclocking
I discovered it completely by accident while testing the stability of my earlier overclock. I left the test bench to do something probably super-important, and when I came back it had put itself to sleep. On waking it up I noticed CPU-Z was reporting a much higher clockspeed because of the new BCLK setting.
Normally the 2400G runs at a base 100MHz with the multiplier helping to then create the 3.6GHz and 3.9GHz stock clockspeeds of the chip. Where it gets really weird is that neither the Ryzen Master utility, nor the MSI motherboard BIOS, allow you to tweak the BCLK.
Initially I assumed it was a mistake. Pre-release platforms often display weird results in monitoring apps - part of the fun of putting together launch day reviews - so I figured there was nothing to it. But after testing and retesting it became obvious the overclock had stuck and this mighty chip was overclocking like a hero.
It's potentially down to the C-state settings in the BIOS I've disabled due to some issues I had getting 3DMark to run on the AMD test platform at the beginning. It's also quite possible it's the old Ryzen sleep timer bug appearing again.
So, potentially, with BCLK overclocking you could get decent overclocks, but it seems BCLK is locked down.
Quote:
But it’s completely repeatable. Every time I reboot and drop it into sleepy time mode for a heartbeat the BCLK setting pushes itself up to a heady 112.50MHz. With the x40.5 multiplier I had in place that meant it was sitting pretty at 4.56GHz when it woke up.
At that speed the performance numbers are incredible. The 2400G hits around 1,000 and 187 for Cinebench's multi and single-threaded tests, making the $100 more expensive Intel Core i5 8600K look a little foolish. And, with a healthy 1.5GHz clockspeed on the Vega 11 GPU, the gaming performance gets mighty playable at the top 1080p game settings. You do need some speedy, pricey DDR4 memory to get the most out of the graphics cores - this Vega chip has no HBM2 to call its own - so that does affect the overall platform costs.
But it's also possible to use the overclock with a discrete GPU in place too. That gives it a heroic level of graphics support from such a budget slice of silicon.
Unfortunately I haven't been able to replicate the overclock in any other motherboard. The only one we have that allows manual overclocking of the BCLK is the Asus Crosshair VI Hero, and the pre-release BIOS update doesn't seem to allow any sort of overclocking on our Ryzen 5 2400G sample.
Now, the likelihood is that the sleepy overclock will get patched out of the platform, but please, AMD, give us the tools to tweak the BCLK ourselves across the board, it potentially makes a massive difference to the chip’s performance.
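The numbers in the quote are internally consistent, since the final clock is simply BCLK × multiplier - a quick check using the figures above:

```python
# Figures from the PCGamesN quote above
bclk_mhz = 112.50    # BCLK after the sleep/wake bug kicks in
multiplier = 40.5    # CPU multiplier set in the BIOS

clock_mhz = bclk_mhz * multiplier
print(f"{clock_mhz / 1000:.2f} GHz")  # → 4.56 GHz (4556.25 MHz)
```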
Hardware Unboxed managed to get up to 1.6GHz for the IGP using the stock cooler.
Also, wow:
https://static.techspot.com/articles...atch_1080p.png
I know DX12 & Vulkan were meant to be working on combining dissimilar GPUs if the game supports it, so I wonder if the 2000G-series iGPU can be combined with a discrete GPU and how effective that would be, or even if it works yet.
Well, technically multi-GPU support isn't SLI or CrossFire (IIRC), as it uses dissimilar GPUs, so you could use an AMD iGPU with an Nvidia discrete GPU; the last time I read about it was when Oxide were playing around with it in Ashes of the Singularity.
Oxide tend to do forward-looking stuff, but sadly lots of games just use plastered-over versions of older engines, hence why DX12/Vulkan is not really widespread enough, and AFAIK what you are talking about needs DX12. We shifted over to AoTS as our major LAN RTS game, from Sup Com, and it runs far better TBH.
I mean, even Bethesda CBA patching some of their own games for Ryzen, and they are an AMD partner.
I think this benchmark alone has just sold me on the Ryzen 5 2400G:
https://www.anandtech.com/show/12425...2400g-review/5
Civ 6 on Ultra settings at 1080p averaging over 32fps... :woowoo:
AMD partner is hardly relevant. The telling thing is that Bethesda CBA full stop.
Certainly for the games they are actually well known for: TES and Fallout. That id know how to program good engines is well known, so don't let Doom being published by Bethesda fool you, but TES has always had a very poor engine, going all the way back to Morrowind*.
Recently bought TW3. Haven't played it much yet, but just looking at it without any mods made me realise how poor Skyrim is and how no amount of texture mods is going to do anything about the poor models.
*Actually, Daggerfall already had a poor engine in many ways. I think it was for this Terminator thing, which used an update of that engine, that Bethesda advertised it as a ground-breaking engine; having fallen through the ground so many times in one of the endless randomly generated Daggerfall dungeons, that's what I called it: the BugSoft ground-breaking-engine™.
I think it's a kick in the teeth when you partner with a company, and the last game they made - and the most-played game they have made or published on Steam currently - has had no patches for Ryzen, but has had loads for the Creation Club, etc., over the last year. Then they make VR versions of the game.
This is part of the problem, and why we don't see better utilisation of newer tech like DX12/Vulkan, as even companies making $100s of millions or even over $1 billion on a game CBA to even support their own games properly. I mean, look at PUBG - it's really poorly optimised too and even reviewers are realising this, and they have had loads of dosh from Early Access. ARK was another one - only now, after over two years, does it remotely run OK-ish.
Not a single one of these games seems to be DX12/Vulkan, so they just cram in assets and seem to take a sledgehammer approach to optimisation, and these are NEW games too.
I still remember back to Skyrim: Bethesda used x87 instructions, which were very inefficient on all CPUs, especially AMD ones (both Intel and AMD had basically advised to stop using them). They did nothing for months, and the community made a mod called Skyboost which made sure more efficient SSE-type instructions were used. Apparently, a while later they released a patch to sort it out, probably out of embarrassment. I actually asked AMD in a Q&A session about it, and I honestly think they are not even aware of how poorly Fallout 4 runs relative to other games, even to the extent that Jim from AdoredTV noticed it too.
Regarding the mods, the community has modded literally everything in the game, down to the character models, animations, everything, and that applies to both versions of Skyrim and FO4. Even the settlement system in FO4 has been revamped with stuff like Sim Settlements, etc. Even the Bethesda HD texture pack is a joke at over 50GB. There are loads of HD packs done by the community which improve everything and simply do a better job, using fewer resources and less HDD space.
Bethesda owes a lot to its community, and it's just disappointing that, for all the great work the community does, Bethesda CBA to even try and make their games run better on more CPUs.
Also, Jim Sterling has coined a term for Bethesda: Bethetic.
:p
Performance would be much higher if AMD could have included a high-speed 'Crystalwell' eDRAM cache, just like Intel's Iris Pro.
I was wondering if the tests were done with the Meltdown and Spectre patch(es) in place. Especially the Meltdown patch has the potential to slow down Intel's processors quite a bit, whereas Ryzen isn't affected by this at all (i.e. doesn't need the Meltdown patch).
From the article:
The 7600K is a tad lower than the 8350K, and the main difference between the two is that the 7600K has less L3 cache. I suspect that is at least as important as the PCIe here.
Quote:
We can surmise that having a x8 PCIe interface is hurting a touch here.
It is only older Intel CPUs that are hit hard; it shouldn't really matter on these.
IIRC FreeSync over HDMI depends on what version of port is on the motherboard (1.2+) and whether the monitor supports it.
Hi Tarinder
I kindly disagree.
People buying these APUs will mostly be playing e-sports games and touching on some heavier games; as your own results showed, these APUs made games playable that were not playable on Intel-based iGPUs. I fully agree with the comment from darcotech. These APUs are strong enough for most AAA games at low settings and e-sports games at even high settings. Anyone even remotely interested should not be buying a discrete GPU for this.
I can see how someone could start with this and get an RX580/GTX1060 later, but I personally would rather just keep it as it is for e-sports games and multimedia.
You should have focused more on IGP performance and added e-sports games, as this will be the main market for this APU. If you look at the recent Steam survey you will notice that GT1030s are among the most popular. Even for WoW players on a budget this would be perfect.
If you bring a discrete GPU into the mix then you kill most of the value proposition.
Talking about WoW: after looking through the reviews, PCGH did actually test it.
FreeSync would be ideal for an APU, as overall it would give a smoother experience, I suspect, than a lower-end card. OFC, it probably only makes sense if you are doing a totally new build or need a replacement monitor, but still, it would be nice if it could be tested at some point against cards like the GT1030 and GTX1050.
I would love the 2400G in a laptop.
2200G is £10 cheaper than i3-8100, and for now at least the total platform cost is much cheaper for AMD because there are still only Z series boards out for Coffee Lake.
Similarly for the 2400G: you've got 8 threads vs the 6 of the i5-8400, so despite having 2 fewer cores the 2400G isn't that far behind in MT benchmarks. Again, for the time being at least, Coffee Lake CPUs are limited to Z370 boards, which start from ~£90, whereas you can put a 2400G in an A320 board for comfortably <£50, or a B350 board from around £60 if you want to overclock.
I'd hardly say you kill the value proposition if you take the IGP out of the equation!
My daughter runs an old FM2+ APU with a discrete graphics card but didn't always. Part of the value proposition is upgradability.
OTOH, I always cringe at PiFast. What is that proving again? Note: anyone who says "single-threaded performance" gets laughed at, complete with pointing.
Calculating pi to 3m places is totally useful! Just look at all the times CPU reviewers do it - the seconds saved there would save them a few minutes over a year!
It's amusing that the cheapest comparable system with a dGPU is one of these APUs with a 1030 bunged in. Comparable Ryzen CPUs cost more (although the 1500X has had a price cut, putting it £5 below the 2400G on Scan), and Intel systems basically do not enter this end of the market (what with only Z motherboards). Sure, the 1030 is a bit faster, but fast enough to warrant at least an extra £65 when you've already got the Vega bit?
That's basically an R7 2700U (although down 1 CU, and only 15W)
As much as I agree with this concept, how would you go about benching these? CS:GO has a lovely bench workshop mod, I know, and half of these games I have not delved into, but Dota, for example, has a DX and a Vulkan flavour, no benchmark tool, and is frequently noticeably more stressful during teamfights than at the beginning.
Edit: a solution for Dota might be to benchmark a replay of a pro game with a specific caster directing the camera. Pick a time range with interesting moments. Not sure if replays gain or lose performance over a live game, but it'd be more consistent.
World of Tanks has a new engine coming Soon™, and to help people prepare they've put out a benchmarking tool for the new engine:
https://worldoftanks.com/en/news/gen.../2018-preview/
http://wotencore.net/
I'm not surprised; a lot of people seem to be getting disillusioned with PUBG because of poor optimisation and critical bugs etc., whereas Fortnite seems more dynamic and easier to just "muck about" in.
Just like every other game that doesn't have an artificial bench tool, take multiple samples from real world gaming use cases and average them.
The 2400G has already sold out on Amazon; Scan still has them in stock. Looks like the selling-like-hot-cakes prediction might be coming true.
I would like to see some strategy game benchmarks at various settings levels, like Civ 6, Stellaris and maybe Rome 2 at 1080p, personally. If the 2400G can do 30+fps in Civ 6 on Ultra settings, that bodes well for High and Medium settings, but it might be worth confirming that in its own benchmark.
Edit: found some Civ 6 benchmarks at 1080p on Medium settings on Tom's Hardware: http://www.tomshardware.co.uk/amd-ry...w-34205-7.html
With eSports, I've seen elsewhere that the 2400G can do 100fps in CS:GO at 1080p on Medium settings, implying, I suppose, that High/Max settings might be around the 80 and 60fps marks respectively, but I'm just guessing to be honest. Apparently some smoke effects can cause problems for the APUs, though.
Rainbow Six Siege is considered an eSport now too, and the Ryzen APUs do well at 1080p, albeit on Low settings, according to Joker Productions. DOTA 2, LoL and Rocket League seem to be fine on High settings at 1080p.
PUBG is an unoptimised mess at the moment though, so it's hard to say if benchmarking the Ryzen APUs with it will be fair in the long run.
The Pentium G4560 (~£55) and Core i3 7100 (~£95) are plentifully available at retail, and are effectively Intel's current direct competition to these given the motherboard considerations etc. There might be a case for the Pentium + a GT 1030 @ £120; it would give you (roughly) the 2400G's GPU performance and the 2200G's CPU performance at an in-between cost. The i3 + GT 1030, OTOH, costs more than a 2400G and only gets you faster ST performance - for everything else it's equal or slower. Plus no overclocking for either of those Intel processors. There's definite value in the Ryzen APUs.
Some multiplayer benchmarks of the Ryzen 5 2400G in OW and BF1:
https://www.youtube.com/watch?v=210tkGvTTiA
https://www.youtube.com/watch?v=K0cxIRp8Q0g
I am impressed.
Edit!!
MP results for the Ryzen 3 2200G in OW, PUBG and Fortnite:
https://www.youtube.com/watch?v=7gSrGlax2JM
https://www.youtube.com/watch?v=_NuZG5_sdsU
https://www.youtube.com/watch?v=T9WiseBU9sI
Yea, the BF1 results look very good... I had to remind myself there was no discrete card involved! Pretty darn impressive.
TechEpiphany has certainly been busy! Here are the Ryzen 5 2400G PUBG results:
https://www.youtube.com/watch?v=6EwHO8siFpY
If they actually try and optimise PUBG, Low or maybe Medium settings @ 30fps might be possible. I get the impression from the comments that "pro players" play on Low or Very Low settings anyway.
Paying a bit more than the R3 chip costs for a 2C4T end-of-life system is not really comparable. Everyone knows* that 4C4T systems are better than 2C4T, especially for multiplayer games (hence the recommendations for an i5 as a minimum for the past decade), so while the Pentium system is the closest thing in price-for-performance, it'll have some glaring issues.
*read: don't have a benchmark to hand