Ryzen CPU grunt, Vega GPU prowess in one chip.
I am sad to see that Hexus showed much more "discrete gaming" than IGP testing, knowing full well that these CPUs have been built mostly for IGP use cases.
It is the most important feature, but in this test it has been shown as a side feature.
One could think that Intel might have had influence on these tests. Can't wait to see how the Intel+Vega chip will be tested. Will discrete gaming get more importance than the IGP there too?
Normally you would be expected to show many IGP uses (gaming, media, ...) and just mention discrete gaming as probably the least important information for this kind of product.
It would be good if you could rearrange the review.
The more you live, less you die. More you play, more you die. Isn't it great.
I am adding more reviews to the review thread, and the 2400G can actually fight a GT1030 with DDR4?? Just, wow.
I am more impressed by the APUs than even the Ryzen CPU launch.
Does FreeSync work OK with these APUs??
Edit!!
Hmm, on further reading there seem to be some niggles - not sure why AMD hasn't sorted them out yet, as the mobile versions have been around for a few months. Oh well, it's to be expected from an AMD launch.
Last edited by CAT-THE-FIFTH; 12-02-2018 at 03:45 PM.
Also, reading from other sites, it seems that there is a limit to how much RAM can be assigned to the IGP (2GB).
What is your experience?
Do you think that 4GB would dramatically change results for games or allow higher quality modes?
Because in a system with 16GB of RAM, 12GB would go to the OS and 4GB to the IGP. Seems like a good balance.
If you use 3,200MHz RAM, games should run better, right?
The more you live, less you die. More you play, more you die. Isn't it great.
Hi there,
There are actually seven benchmarks for discrete and seven for IGP - it may look like more for the former because the graphs are taller. The reason for the full set of discrete is twofold: we already have all the numbers from other comparable chips and, given that even the Vega IGP cannot muster enough horsepower in major titles, we reckon many users will add a discrete card to a system.
These look fantastic for budget gaming rigs - capable alone of entry-level gaming, with enough CPU grunt to drive a dGPU comfortably later if needed.
Given the price of GPUs and the limited time I have for gaming now, I'm strongly considering selling my i7-6700 and GTX 1070 and just running one of these, then adding a GTX 2050 (Ti) or the AMD equivalent whenever they appear. My Switch is getting more gaming use than my PC now too.
Edit: Thinking logically, that would be a stupid thing to do; I'll just chuck my ancient 7870 XT in and sell the GTX 1070. If I were buying now from scratch, though, I'd definitely have a 2400G.
Last edited by Bagnaj97; 12-02-2018 at 04:27 PM.
Hi again,
The benchmarks were done with a 2GB UMA framebuffer and system memory at 2,933MHz CL14-14-14-34-1T. There is no current option to set a larger UMA space. Using 3,200MHz RAM with the same timings increases performance by around three to five per cent, going by some quick tests.
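For a rough sense of why faster DRAM helps the IGP, here's a quick back-of-the-envelope bandwidth sketch, assuming the stock dual-channel DDR4 configuration (two 64-bit channels):

```python
# Theoretical peak bandwidth for dual-channel DDR4:
# transfers/s * channels * 8 bytes per 64-bit channel
def ddr4_bandwidth_gbs(mt_per_s, channels=2):
    return mt_per_s * 1e6 * channels * 8 / 1e9

bw_2933 = ddr4_bandwidth_gbs(2933)   # ~46.9 GB/s
bw_3200 = ddr4_bandwidth_gbs(3200)   # ~51.2 GB/s
print(f"2,933MHz: {bw_2933:.1f} GB/s")
print(f"3,200MHz: {bw_3200:.1f} GB/s")
print(f"Uplift:   {bw_3200 / bw_2933 - 1:.1%}")  # ~9.1% more bandwidth
```

Roughly 9% more bandwidth translating into the 3-5% performance gain quoted above suggests the IGP is largely, but not entirely, bandwidth-bound.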
The Stilt looked at performance, and the loss of L3 cache seems to not really affect things in most cases; it seems there is actually a slight increase in core IPC overall.
Edit!!
Analysis from The Stilt:
https://forums.anandtech.com/threads...#post-39301964
Some of my personal thoughts and experiences on Raven:
Based on the results of my test suite, the IPC of Raven varies between -4.8% and +2.8% compared to Zeppelin, the average difference being a ~1.5% improvement. The difference is most likely a result of the changes made to the L2 & L3 caches, rather than the changes made to the actual Zen CPU cores themselves.
The early rumors were correct and Raven does in fact have a significantly lower L2 cache latency than Zeppelin does. The L2 cache in Raven has a 12 CLK latency, whereas the L2 latency for Zeppelin is 17 CLKs. The L2 caches in Zeppelin never posed a limitation of any sort to the Fmax, so considering the halved L3 cache in Raven, getting rid of the "slack" in the L2 latency was a smart and most likely highly beneficial move.
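To put those cycle counts in real terms, a quick conversion to wall-clock time (a sketch; the 3.6GHz clock is just an illustrative Raven-class frequency, not a figure from the write-up):

```python
# Convert cache latency in clock cycles to wall-clock time at a given frequency.
def cycles_to_ns(cycles, freq_ghz):
    return cycles / freq_ghz

freq = 3.6  # GHz, illustrative CPU clock
print(f"Raven L2    (12 CLK): {cycles_to_ns(12, freq):.2f} ns")  # ~3.33 ns
print(f"Zeppelin L2 (17 CLK): {cycles_to_ns(17, freq):.2f} ns")  # ~4.72 ns
# 12 vs 17 cycles is a ~29% cut in L2 latency at the same clock.
```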
It is hard to tell exactly how small or large the penalty from the halved L3 cache is, as the L2 has been altered significantly at the same time. Generally, however, the performance hit from the halved L3 cache varies between small and non-existent. Workloads which hit the >L1 caches hard, such as the Bullet Physics library, perform <5% worse on Raven than on Zeppelin, which is equipped with twice the L3 cache per core. Considering that Bullet was the relatively worst-performing workload in the whole test suite for Raven, it is rather safe to say that the hit from the smaller L3 cache is extremely minor in general.
The difference between the Vega 8 (8CU/2RB) iGPU and Vega 11 (11CU/2RB) iGPU at the same frequency is fairly minor, usually around 8-11% depending on the memory frequency. At stock, Vega 8 operates at an 1100MHz engine clock and Vega 11 at a 1240MHz (1251MHz nominal) engine clock. The typical overclock for both of the variants is >= 1600MHz at 1.200V SoC voltage. Due to the present memory bandwidth limitation, both of them will perform almost the same when they are overclocked close to or at the typical maximum frequency.
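As a rough illustration of why the two iGPUs converge when overclocked, the paper shader throughput works out as follows (a sketch; it assumes the standard GCN figures of 64 shaders per CU and a peak FP32 rate of 2 ops per shader per clock):

```python
# Peak FP32 throughput for a GCN/Vega iGPU: CUs * 64 shaders * 2 FLOP/clock * clock
def fp32_gflops(cus, clock_mhz):
    return cus * 64 * 2 * clock_mhz / 1000

print(f"Vega 8  @ 1100MHz: {fp32_gflops(8, 1100):.0f} GFLOPS")   # ~1126
print(f"Vega 11 @ 1240MHz: {fp32_gflops(11, 1240):.0f} GFLOPS")  # ~1746
print(f"Vega 8  @ 1600MHz: {fp32_gflops(8, 1600):.0f} GFLOPS")   # ~1638
print(f"Vega 11 @ 1600MHz: {fp32_gflops(11, 1600):.0f} GFLOPS")  # ~2253
# On paper Vega 11 keeps a 37.5% shader advantage at equal clocks, but with
# both chips fed by the same dual-channel DDR4 bus, the real-world gap
# shrinks to the 8-11% (or less) described above.
```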
One major thing to consider prior to overclocking the iGPU on Raven APUs is the power consumption. Most of the mainstream AM4 motherboards have a 2-phase VRM for the VDDCR_SoC voltage rail (of varying quality and with varying cooling as well), which on Raven supplies not only the SoC portion of the chip but the GPU cores as well. At stock, the peak power consumption of Vega 11 is around 36W. When overclocked to the typical 1600MHz engine frequency, the power consumption will rise to 55-60W. While 60W doesn't sound too high, it is more than enough to tax the average 2-phase VRM (around 25A per phase).
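A quick sanity check on that VRM maths (a sketch; it assumes the ~1.2V SoC rail voltage mentioned above and ignores VRM conversion losses):

```python
# Current drawn on the VDDCR_SoC rail at a given power and voltage,
# split evenly across the VRM phases.
def per_phase_current(watts, volts, phases=2):
    return watts / volts / phases

print(f"Stock (36W):      {per_phase_current(36, 1.2):.0f} A/phase")  # ~15A
print(f"OC 1600MHz (60W): {per_phase_current(60, 1.2):.0f} A/phase")  # ~25A
# An overclocked Vega 11 pushes a typical 2-phase SoC VRM right to the
# ~25A-per-phase figure quoted above, with little margin left.
```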
Just like Zeppelin, Raven also features the so-called "OC-Mode". On Raven there are two separate triggers to activate the "OC-Mode": increasing the CPU frequency or increasing the iGPU engine frequency. Triggering either one will get rid of all the limiters (power, current, utilization) and voltage controllers, the same way as it did on Zeppelin. The only difference is that by triggering just the iGPU "OC-Mode", the Turbo / XFR features of the CPU will not be lost as they were on Zeppelin. However, at least for the time being, it is not advised to trigger only the iGPU "OC-Mode": activating either of the "OC-Modes" will disable all of the voltage controllers, meaning that while Turbo / XFR remain active the CPU voltage will rise to extremely high levels. When the CPU "OC-Mode" is activated, Turbo and XFR will be disabled as well, just like on Zeppelin, and the CPU voltage will remain at reasonably sane levels due to the slightly lower resulting frequencies.
Activating either of the "OC-Modes" will also immediately disable the dLDO for the GPU cores. At stock, the iGPU dLDO feeds on the VDDCR_SoC voltage rail and the typical voltage drop across the regulator is around 250mV. Once the "OC-Mode" is activated, the GPU dLDO is placed in bypass mode, meaning the GPU cores will then receive the source voltage directly, without any further drop.
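To make the dLDO arithmetic concrete (a sketch; the 1.2V rail figure is borrowed from the overclocking discussion above, and the stock rail voltage may differ):

```python
# At stock the GPU dLDO drops ~250mV from the VDDCR_SoC rail;
# in bypass mode the cores see the rail voltage directly.
soc_rail = 1.200    # V, SoC rail voltage (assumed, per the OC discussion)
dldo_drop = 0.250   # V, typical regulator drop at stock
print(f"Stock GPU voltage:  {soc_rail - dldo_drop:.3f} V")  # ~0.950 V
print(f"Bypass GPU voltage: {soc_rail:.3f} V")              # 1.200 V
```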
The memory controller on Raven clearly contains some changes in comparison to Zeppelin; however, the said changes unfortunately appear to be rather minor and quite possibly relate more to the controller's firmware than to the actual hardware IP itself. On average the memory latency has decreased by ~3% at the same settings, but the bandwidth seems to have regressed slightly at the same time. Also, the highest achievable memory frequency seems to be exactly the same as on Zeppelin, 3400-3533MHz depending on the silicon quality, the motherboard and the DRAM modules used. Fortunately, at least the memory training speed and reliability have been vastly improved.
Similar to Zeppelin, the frequency headroom for the CPU cores themselves is very slim over the stock frequencies. The typical highest practical CPU frequency will be around 3.85-3.95GHz, depending on the silicon quality.
Higher frequencies than those mentioned might be possible; however, achieving them will require the voltage to be raised to a point where the power efficiency is long gone and the lifetime of the silicon is reduced. At frequencies beyond the inflection point (3.9GHz in the chart), the cost of the last 100MHz in frequency can easily be a >25% increase in power consumption.
With the tested samples, 4.1GHz could not be achieved even at 1.550V, despite 4.0GHz being deemed stable at 1.375V, which is already high but still well within the realms of the sustainable.
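As a rough illustration of how the voltage wall punishes the last few hundred megahertz, dynamic power scales roughly with frequency times voltage squared (a sketch; the 1.30V baseline for 3.9GHz is an assumption for illustration, only the 1.375V-at-4.0GHz point comes from the text above):

```python
# Dynamic power scales roughly as P ~ f * V^2.
def power_ratio(f1, v1, f2, v2):
    return (f2 / f1) * (v2 / v1) ** 2

# Assumed: ~1.30V suffices for 3.9GHz (illustrative baseline);
# 1.375V for 4.0GHz is from the tested samples above.
ratio = power_ratio(3.9, 1.30, 4.0, 1.375)
print(f"Power increase for +100MHz: {ratio - 1:.1%}")  # ~14.7%
# Even with a generous baseline, a ~2.6% frequency gain costs well over
# 10% extra power, and the curve only steepens from there.
```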
With Raven there is also another aspect which is not present on Zeppelin: unlike Zeppelin, Raven uses conventional TIM (instead of indium sTIM) between the core and the heatspreader. The conventional TIM used on Raven isn't the only factor which affects its thermals, either. Due to the extreme thinness of the Raven die, the heatspreader used for Raven AM4 APUs has been redesigned. Normally the contact surface inside the heatspreader is perfectly flat. The heatspreaders used on Raven have a "hump" inside them, which allows the heatspreader to make contact with the die itself. Without the "hump" the heatspreader would only make contact with the SMD components located around the die, which stand taller than the die itself. The "hump" adds an extra 0.5mm to the heatspreader thickness and therefore increases the thermal resistance of the heatspreader as well.
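For a feel of the magnitudes involved, here is a crude one-dimensional conduction estimate (a sketch; the die area, bond-line thickness and conductivity values are all illustrative assumptions, not measured figures):

```python
# 1-D conduction: R = thickness / (conductivity * area); dT = P * R
def thermal_resistance(thickness_m, k_w_mk, area_m2):
    return thickness_m / (k_w_mk * area_m2)

area = 210e-6  # m^2, assumed ~210mm^2 Raven die (illustrative)
# Extra 0.5mm of copper from the heatspreader "hump" (k ~ 390 W/mK)
r_hump = thermal_resistance(0.5e-3, 390, area)
# Conventional paste TIM vs indium solder, assuming a ~50um bond line
r_paste = thermal_resistance(50e-6, 6, area)    # paste k ~ 6 W/mK (assumed)
r_indium = thermal_resistance(50e-6, 80, area)  # indium k ~ 80 W/mK (assumed)

for name, r in [("copper hump", r_hump), ("paste TIM", r_paste), ("indium sTIM", r_indium)]:
    print(f"{name:12s}: {r*1000:4.1f} mK/W -> +{65*r:.1f} C at 65W")
```

Under these illustrative numbers the paste-versus-solder choice dominates the extra copper from the hump, though in practice heat spreading and mounting pressure matter too.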
Despite Raven's slightly larger die size, the temperatures are still significantly higher at the same power dissipation and cooling. Even at a modest 65W power dissipation the CPU cores can reach temperatures in excess of 70°C.
An aftermarket cooler is definitely recommended, at least for the 2400G, especially if there are any plans to overclock the chip. The 2400G in its stock configuration is already somewhat bound by the default 65W power limit, and the chip can easily dissipate up to 120W of heat when it is overclocked to the typical maximum figures.
Some ballpark 3D performance figures, based on my own testing: the RX 550 is around 22% faster and the RX 560 around 68% faster than a stock 2400G APU.
When the 2400G APU is overclocked to the typical maximum figures (1600MHz engine and 3400MHz DRAM), its performance is almost identical to a stock RX 550.
- 2400G at stock: 1240MHz engine, 2933MHz DRAM (3236 in 3DMark Fire Strike)
- 2400G at a typical max OC: 1600MHz engine, 3400MHz DRAM (3960 in 3DMark Fire Strike)
- RX 550 at stock: 1210MHz engine, 7000MHz (QDR) DRAM (3955 in 3DMark Fire Strike)
- RX 560 at stock: 1210MHz engine, 7000MHz (QDR) DRAM (5430 in 3DMark Fire Strike)
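As a quick cross-check, the quoted percentages fall straight out of those Fire Strike numbers:

```python
# Fire Strike scores from the list above.
scores = {"2400G stock": 3236, "2400G OC": 3960, "RX 550": 3955, "RX 560": 5430}
base = scores["2400G stock"]
for name, score in scores.items():
    print(f"{name:12s}: {score} ({score / base - 1:+.0%} vs stock 2400G)")
# RX 550: +22%, RX 560: +68%, and the overclocked 2400G (3960) lands
# within a few points of the stock RX 550 (3955).
```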
If you are unfamiliar with some of the terms used, please check the original Ryzen: Strictly Technical write-up.
Last edited by CAT-THE-FIFTH; 12-02-2018 at 06:41 PM.
These are perfect for what I want and need going forward....
Old puter - still good enuff till I save some pennies!
WTF is happening here:
https://www.pcgamesn.com/amd-raven-ridge-overclocking
A weird bug/feature has appeared during our testing of the Ryzen 5 2400G Raven Ridge APU that means our chip overclocks by a huge amount when you put it to sleep. You may have seen some leaked benchmarks appear online, and yes... they're true, it can hit 4.56GHz on air.
Check out the full review of the AMD Ryzen 5 2400G.
This bug/feature is either in the darling little MSI B350I Pro AC motherboard that came as part of the Raven Ridge test kit, or in the Ryzen 5 2400G APU itself. It sees one of them automatically overclocking the chip far beyond what I've been able to do in the BIOS, or with the Ryzen Master utility.
In my testing I've only been able to push the top Raven Ridge APU up to 4.05GHz using simple multiplier tweaking. I have been able to get the chip booting into Windows, and running some light gaming workloads, at 4.2GHz, but put any serious CPU load onto it and the chip falls over.
But, with the bizarre sleepy overclock, that same APU is able to top 4.56GHz and remain completely stable under full gaming and CPU testing loads.
I discovered it completely by accident while testing the stability of my earlier overclock. I left the test bench to do something probably super-important, and when I came back it had put itself to sleep. On waking it up I noticed CPU-Z was reporting a much higher clockspeed because of the new BCLK setting.
Normally the 2400G runs at a base 100MHz with the multiplier helping to then create the 3.6GHz and 3.9GHz stock clockspeeds of the chip. Where it gets really weird is that neither the Ryzen Master utility, nor the MSI motherboard BIOS, allow you to tweak the BCLK.
Initially I assumed it was a mistake. Pre-release platforms often display weird results in monitoring apps - part of the fun of putting together launch day reviews - so I figured there was nothing to it. But after testing and retesting it became obvious the overclock had stuck and this mighty chip was overclocking like a hero.
It's potentially down to the C-state settings in the BIOS, which I've disabled due to some issues I had getting 3DMark to run on the AMD test platform at the beginning. It's also quite possible it's the old Ryzen sleep timer bug appearing again. So, potentially with BCLK overclocking you could get decent overclocks, but it seems BCLK is locked down. But it's completely repeatable. Every time I reboot and drop it into sleepy-time mode for a heartbeat, the BCLK setting pushes itself up to a heady 112.50MHz. With the x40.5 multiplier I had in place, that meant it was sitting pretty at 4.56GHz when it woke up.
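The clock arithmetic checks out against the quoted figures (a quick sketch; effective clock = BCLK x multiplier):

```python
# Effective CPU clock = base clock (BCLK) * multiplier
def cpu_clock_ghz(bclk_mhz, multiplier):
    return bclk_mhz * multiplier / 1000

print(f"Stock base:  {cpu_clock_ghz(100, 36):.2f} GHz")     # 3.60 GHz
print(f"Stock boost: {cpu_clock_ghz(100, 39):.2f} GHz")     # 3.90 GHz
print(f"Sleepy OC:   {cpu_clock_ghz(112.5, 40.5):.3f} GHz") # 4.556 GHz
# The 12.5% BCLK bump on top of the x40.5 multiplier is exactly what
# produces the reported 4.56GHz.
```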
At that speed the performance numbers are incredible. The 2400G hits around 1,000 and 187 for Cinebench's multi and single-threaded tests, making the $100 more expensive Intel Core i5 8600K look a little foolish. And, with a healthy 1.5GHz clockspeed on the Vega 11 GPU, the gaming performance gets mighty playable at the top 1080p game settings. You do need some speedy, pricey DDR4 memory to get the most out of the graphics cores - this Vega chip has no HBM2 to call its own - so that does affect the overall platform costs.
But it's also possible to use the overclock with a discrete GPU in place too. That gives it a heroic level of graphics support from such a budget slice of silicon.
Unfortunately I haven't been able to replicate the overclock in any other motherboard. The only one we have that allows manual overclocking of the BCLK is the Asus Crosshair VI Hero, and the pre-release BIOS update doesn't seem to allow any sort of overclocking on our Ryzen 5 2400G sample.
Now, the likelihood is that the sleepy overclock will get patched out of the platform, but please, AMD, give us the tools to tweak the BCLK ourselves across the board; it potentially makes a massive difference to the chip's performance.
Hardware Unboxed managed to get up to 1.6GHz for the IGP using the stock cooler.
Also, wow:
https://static.techspot.com/articles...atch_1080p.png
Last edited by CAT-THE-FIFTH; 12-02-2018 at 06:43 PM.
I know DX12 & Vulkan were meant to be working on combining dissimilar GPUs if the game supports it, so I wonder if the 2000G-series iGPU can be combined with a discrete GPU, how effective that would be, or even if it works yet.
Well, technically multi-GPU support isn't SLI or CrossFire (IIRC), as it uses dissimilar GPUs, so you could use an AMD iGPU with an Nvidia discrete GPU. The last time I read about it was when Oxide were playing around with it in Ashes of the Singularity.
Oxide tend to do forward-looking stuff, but sadly lots of games just use plastered-over versions of older engines, hence why DX12/Vulkan is not really widespread enough, and AFAIK what you are talking about needs DX12. We shifted over to AoTS as our major LAN RTS game, from Sup Com, and it runs far better TBH.
I mean, even Bethesda CBA patching some of their own games for Ryzen, and they are an AMD partner.
I think this benchmark alone has just sold me on the Ryzen 5 2400G:
https://www.anandtech.com/show/12425...2400g-review/5
Civ 6 on Ultra settings at 1080p averaging over 32fps...
Being an AMD partner is hardly relevant. The telling thing is that Bethesda CBA, full stop.
Certainly for the games they are actually well known for: TES and Fallout. That id know how to program good engines is well known, so don't let Doom being published by Bethesda fool you; TES has always had a very poor engine, going all the way back to Morrowind*.
Recently bought TW3. Haven't played it much yet, but just looking at it without any mods made me realise how poor Skyrim is, and how no amount of texture mods is going to do anything about the poor models.
*Actually, Daggerfall already had a poor engine in many ways. I think it was for this Terminator thing, which used an updated version of that engine, that Bethesda advertised it as a ground-breaking engine. Having fallen through the ground so many times in one of the endless randomly generated Daggerfall dungeons, that's what I called it: the BugSoft ground-breaking-engine™.