Are AMD stretching the meaning of maximum boost?
https://www.youtube.com/watch?v=WXbCdGENp5I
AMD seem to be stretching a lot of things in marketing videos these days. Not impressed by that aspect; they don't need to, their chips are great on their own.
With RDNA claimed to give 50% improvements at same power & same configuration, AMD could get massive gains on the 4000 series APUs just by upgrading zen+ to zen2 and GCN to RDNA (and have a tiny chip if it's on 7nm)
Zen2 TR might be able to use salvaged EPYC IO dies? Even if it's just IO, it's still a massive chip by AMD CPU standards, so salvage might be useful. 32 cores is still fine for the top end until Intel start competing properly - and if it is ~$15 per chiplet (I agree with kompukare's maths) then 16 cores will match the old product stack, with the same CPU grunt between top-end AM4 and TR, while offering insane bandwidth (4 x 4-core dies would also help with this) and not costing much.
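A back-of-envelope sketch of that per-chiplet cost, using the standard dies-per-wafer approximation and a simple Poisson yield model. The wafer price and defect density below are illustrative assumptions, not AMD figures:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Approximate gross dies per wafer (standard edge-loss formula)."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def yielded_cost(die_area_mm2, wafer_cost, defect_density_per_cm2=0.1):
    """Cost per good die under a simple Poisson yield model.
    Wafer cost and defect density are assumptions, not AMD numbers."""
    yield_frac = math.exp(-defect_density_per_cm2 * die_area_mm2 / 100)
    good_dies = dies_per_wafer(die_area_mm2) * yield_frac
    return wafer_cost / good_dies

# ~74 mm^2 Zen2 chiplet on an assumed ~$10k 7nm wafer:
print(round(yielded_cost(74, 10_000), 2))  # low-teens of dollars per good die
```

With those assumed inputs a ~74 mm² chiplet lands in the low teens of dollars per good die, the same ballpark as the ~$15 figure above.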
Something really interesting to note regarding inter-core latency - something I've not seen any English-speaking sites pick up on, even the ones who were quick to criticise this for Zen1 :rolleyes: - latency on the multi-die processors is as consistent as on the single-die variants, because all inter-CCX data passes through the IO die, and latency is improved vs Gen1 despite this!
https://www.reddit.com/r/Amd/comment..._data_latency/
Also confirmation/explanation from Robert Hallock: https://twitter.com/Thracks/status/1...316505602?s=19
Not sure what's going on with the Zen+ results from that site though, they seem weirdly high?
Another thing worth noting: at least in the tested processor, it seems a CCX comprises three cores. I wonder if that's how all the 6/12C processors are configured, with 3C CCXs consistently across the dies? Amongst other benefits, you get even heat loading, bandwidth distribution, etc.
A couple of related points here, that feed off my 18 core TR speculation. Previous gens of Ryzen all followed the evenly spread number of cores per CCX when reducing core count - so 4+4, 3+3, 2+2, etc. That was one of the reasons I speculated we might see 18 core TR: it could be made from 3 salvaged 3+3 core chiplets. Reported yields are really high, so I'm not sure 2+2 core chiplets are going to be common enough to build a product stack around.
I'm pretty confident that the 3900X will be made up entirely of 3+3 core chiplets. We know that IO dies work with fewer chiplets than links, or you *couldn't* do 1 and 2 chiplet AM4 processors. So a sensible TR product stack - to me at least - would use 3 3+3 chiplets for an 18 core product, 4 4+4 chiplets for a 32 core product, then either 3 4+4 chiplets, or 4 3+3 chiplets, for 24 cores. Possibly the latter, as that uses more binned/salvaged parts?
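The chiplet-mix speculation above can be enumerated in a few lines. The chiplet counts and the 3+3/4+4 bins are purely the guesses from this post, not confirmed SKUs:

```python
from itertools import product

# Cores active per chiplet (full 4+4 or salvaged 3+3) and the chiplet
# counts a TR package might carry; both are speculation from the post.
cores_per_chiplet = {"4+4": 8, "3+3": 6}
chiplet_counts = (3, 4)

stack = sorted({n * c for n, (_, c) in
                product(chiplet_counts, cores_per_chiplet.items())})
print(stack)  # [18, 24, 32]; note 24 is reachable as 3x(4+4) or 4x(3+3)
```

The interesting output is the middle entry: 24 cores can be built two ways, which is why the post suggests the salvaged 4 x 3+3 route for that SKU.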
As I say, given the reportedly very high yields I'm just not convinced that it's going to be worth AMD binning to 2+2 chiplets. And they seem to be aiming for higher boost clocks as you go up the stack for Ryzen 3000, so they're only going to want top-bin parts going to TR (which, tbf, is also what they've done in previous generations).
It's going to be interesting, anyway. As to IO dies, I can see Xlucine's point about using binned EPYC IO chiplets: since you'll be ditching half the memory channels and PCIe lanes, you could salvage a lot of dies with minor faults to produce functional TR chips (and going that way might let them produce higher core count TR processors, too...)
Have you done what I've done in the past and confused CCXs with the die, or what AMD are now calling CCDs? CCX data should only need to pass through the I/O die if it's headed for a CCX on another CCD (or some other I/O); each CCD has two CCXs, so technically inter-CCX traffic on the same CCD shouldn't need to go to the I/O die.
Each CCD comprises two CCXs and each CCX consists of four cores, so it's 8 cores per CCD (per die/chiplet). All the 32MB cache SKUs contain a single CCD and all the 64MB ones contain two CCDs. What's more interesting IMO is that lower core count SKUs still come with the same amount of L3.
Lastly, it's hard to know why they got those results on Zen+. It could be they're measuring different sources and destinations, it could be they're using slower RAM so the data fabric is clocked slower, it could be they're measuring beyond first-word access times, or something else - I don't read Russian. :)
I seem to recall reading somewhere that the EPYC IO die was designed to be literally cut in half? It might have been speculation though so don't quote me on that.
Nope, check the links I posted - what I said is confirmed through an AMD rep and testing.
Directly quoting Robert Hallock (emphasis mine):
It sounds counter-intuitive at first, but it is indeed what's happening. Quote:
Yes. All CCX<->CCX communication traverses the IOD, meaning all CCXes communicate at a common latency. Same for cache. Same for DRAM. From the perspective of the system, this is monolithic die behavior.
A few ns of wire latency notwithstanding. :)
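A toy model makes Hallock's point concrete: if every cross-CCX transfer takes the same trip through the IOD, cross-CCX latency is uniform by construction. The cycle figures below are made up for illustration, not measurements:

```python
# Toy latency model; the numbers are illustrative, not measured.
CCX_OF = {core: core // 4 for core in range(16)}  # hypothetical 16-core part, 4 cores/CCX

def latency_via_iod(a, b, intra=30, via_iod=70):
    """Zen2-style routing: any cross-CCX transfer goes through the IO die,
    so every cross-CCX pair pays the same cost."""
    return intra if CCX_OF[a] == CCX_OF[b] else via_iod

cross_ccx = {latency_via_iod(a, b)
             for a in range(16) for b in range(16)
             if CCX_OF[a] != CCX_OF[b]}
print(cross_ccx)  # every cross-CCX pair sees one common figure: {70}
```

That single-element set is the "monolithic die behavior" in the quote: no pair of CCXes is closer or further than any other, unlike Zen1 where same-die and cross-die hops differed.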
Ignore me, I think I got myself confused again...
On second thoughts, that seems a rather dumb choice if they have done that; why would you send data out of a piece of silicon only to have it immediately return?
I mean, I know we're only talking signal transmission time, so it's not really about the time, but surely doing something like that adds to the complexity, and costs.
Signal transmission costs power and time, as you are charging an RC network, but the alternative would be to add another layer/tier to the hierarchy, and I'm sure AMD will have done extensive simulations and concluded this was the best answer. Heck, the simplest answer would be to make a CCX of 8 cores; again I'm sure AMD will have simulated that, but once again a cluster of 4 cores seems to be the best balance.
Wires are pretty cheap though, so I'm sure cost wasn't an issue either way.
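For a rough sense of what "charging an RC network" costs, the usual first-order numbers are a delay of about 0.69·RC to the 50% threshold and an energy of ½CV² per charging edge. All values below are assumptions for illustration, not measured Zen2 figures:

```python
# Back-of-envelope for driving an off-die wire (all values assumed).
R = 50          # ohms: driver plus wire resistance
C = 2e-12       # farads: a couple of pF of wire/pad capacitance
V = 1.0         # volts: signal swing

tau = R * C                  # RC time constant
delay = 0.69 * tau           # 50% threshold delay for a step input
energy = 0.5 * C * V**2      # energy per charging transition

print(f"delay ~{delay * 1e12:.0f} ps, energy ~{energy * 1e15:.0f} fJ per edge")
```

Tens of picoseconds and around a picojoule per edge with these inputs, which fits the "a few ns of wire latency notwithstanding" remark: the wire hop is cheap compared with adding another tier to the cache/fabric hierarchy.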
Cool, the process is in line with what I was expecting, but the results are better than I'd hoped for.
Windows knows to bucket fill each CCX too. It would be interesting to know which apps heavily use inter-core data transfer/lookup and how threaded they are - if only 3-4 threads is usual then this is a very nice architecture.
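The "bucket fill" placement can be sketched as a tiny pure function: fill one CCX before spilling to the next, so a 3-4 thread app stays behind a single shared L3. This is a toy model of the policy, not the actual Windows scheduler:

```python
def bucket_fill(n_threads, cores_per_ccx=4, n_ccx=4):
    """Assign thread i to (ccx, core) by filling one CCX before the next.
    A toy model of the placement policy, not the real scheduler."""
    return [(i // cores_per_ccx, i % cores_per_ccx)
            for i in range(min(n_threads, cores_per_ccx * n_ccx))]

# A 4-thread workload lands entirely on CCX 0, sharing one L3 slice;
# only the fifth thread spills over to CCX 1.
print(bucket_fill(4))
print(bucket_fill(5)[-1])
```

If the common case really is 3-4 communicating threads, this placement keeps all their shared data in one CCX's L3 and never touches the IOD path at all.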
For sure, they obviously know better than me, but I would've thought, logically speaking, that allowing each cluster of four cores to speak directly to the other four cores on the die would've made more sense, and having a single entry/exit point (SerDes) for the entire die would've been more logical. Although, having thought about the single-SerDes idea, maybe that's why they chose the more convoluted route: having two SerDes on each die, one for each CCX, probably means less chance of a bottleneck vs the small hit in latency.
SerDes are tiny so space probably wouldn't be much of a concern. Going on what watercooled posted there must be two per CCD, one for each CCX, and they're directly connected to what I imagine would be one of multiple SerDes on the I/O die.
Is it just me, or does ryzen master hog CPU time? Task manager shows it taking 10-20% of the CPU when I have it open in the background
This is interesting:
https://www.youtube.com/watch?v=xQUAuYxWam4
Further evidence that Zen2 performance does vary depending on the rest of the system, but not by enough to worry over - the cheapest A320 board going was only 100MHz down on a top-end X570 board in a stress test. The X570 board might not have had PBO et al turned on, but it's still impressive (4 cheap CPU phases with no heatsink managing a 12-core CPU without major issue - it was well above the base clock, which is meant to be the worst-case scenario).
I thought those low end A320 boards were usually only rated for 65W! That is still power enough for a 3700X though, which is pretty awesome for a cheap board.
I haven't heard anything positive about PBO yet, it seems to just make things hotter for minimal performance gain. I think that is because the plain non-overclocking Precision Boost is very impressive so there isn't much left to squeeze out.
Edit: I looked it up, that Gigabyte motherboard really does support 105W cpus. Nice.
It is indeed. My current impression is that unless you can foresee actually needing (or using) PCIe4 within the next three years or so, the extra money for X570 is generally wasted. I suspect that next year the X670, or whatever it will be called, will be a lot cooler running. Probably the B650 will have PCIe4, making really decent £100-ish PCIe4 motherboards available.
I've had a quick look at PCIe ratification dates and PCIe4 was ratified June 8th 2017, PCIe5 was ratified 29th May 2019 so I don't expect PCIe5 in consumer kit until summer 2021. Probably with DDR5 and a new socket for AMD as well.
To add to that, if you've already got an x470/B450 motherboard, there's very little reason to upgrade currently even if you do buy a Ryzen 3000 series CPU.
I'm still delaying upgrading my i5-3570K as although the Ryzen 3000 series is waaaaaaaaay faster, I'm not CPU limited enough yet.
EDIT: Just thought - overclockers may disagree with me. Take all of that above as completely ignoring overclocking capability.
I just noticed - navi CUs take up less die area than GCN CUs (compare 5700XT with Mi60/R7 - 3.9 mm^2/CU, compared to 5.2 mm^2/CU for GCN). Uncore like the fixed function decode would be expected to be constant in size, and so make the R7 number better; the memory controller will vary though but that ought to be 1) proportional to CU count and 2) R7 is HBM anyway, so should have that off-boarded. I predict we'll see >11 CUs in the 4000 series APUs, which gets even better with how navi beats GCN on a per-CU basis in games
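The arithmetic behind that comparison, taking the per-CU area figures quoted above as given rather than re-deriving them (the 11-CU baseline is the current Vega-11 APU configuration):

```python
# Per-CU area figures from the post (mm^2 per CU); taken as given.
NAVI_MM2_PER_CU = 3.9
GCN_MM2_PER_CU = 5.2

# Silicon budget of today's 11-CU GCN APU graphics block, and how many
# Navi CUs would fit in that same budget.
gcn_budget = 11 * GCN_MM2_PER_CU
navi_cus_in_budget = gcn_budget / NAVI_MM2_PER_CU

print(f"{gcn_budget:.1f} mm^2 of GCN CUs -> ~{navi_cus_in_budget:.1f} Navi CUs")
```

So the same ~57 mm² that holds 11 GCN CUs would hold roughly 14 Navi CUs, before even counting Navi's per-CU performance advantage, which is why >11 CUs in the 4000 series APUs looks plausible.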
That's pretty much my opinion on the matter too TBH. Gain a bit of e-peen if anything, and often totally destroy power efficiency in the process.
Enjoyed catching up on this discussion. Didn't watch the videos though (one playing in the background for audio).
A few questions please....
What does this 'chiplet' talk refer to?
How far is the 3700x overclocking?
Has anyone done a decent IPC / increased cache size comparison of clock for clock say R7 1700 14nm vs 3600x 7nm? Say as in running them both at 4GHz per thread?
Thanks in advance for the help here. I don't feel I need to look into the detail of inter-die and memory controller latency reductions.
Chiplets are the individual dies used in the multi-die layout of the multi-chip module that AMD are using. A review of the 3x00 series will have diagrams and pictures of the two chiplets that make up a 3700X, for example.
The 3700X for me doesn't seem even worth attempting to overclock. That tends to lock all cores at the overclocked frequency, which is sometimes a tiny win, losing the stock power efficiency which I consider a big win. My chip can boost a single core or two up to those sorts of frequencies anyway, so for my usage where I want high clocks (code linking for example, or lightly threaded games) the stock chip works fine for me.
As for IPC, it is very workload dependent. There are two big changes over earlier Ryzen chips: the floating point units are better and the L3 cache is huge. The floating point units help crank through stuff like rendering, but I wanted the L3 cache for code compiling. For my programming use this chip is the stuff of legends, laughing in the face of the Xeons I'm used to using :D
DancesWithUnix I hope you are well.
I was rather hoping someone would put me off spending more money. Sadly...
The performance of my Ryzen 5 3600 is great for the money, but I'm still a bit unsure about the voltage and temps... They always seem a bit too high, especially seeing as it's a 65W chip and at the lower end of the Ryzen 3000 series.
Voltages reach 1.450V when the cores hit 4.2GHz, and the voltages usually hover around 1.375V under load when all cores are being used (usually at 4.05GHz with all cores in use). I'm seeing temps of around 77C when gaming and that's also with a Noctua NH-U12A cooler which is one of the best, if not the best air cooler out there! Case cooling is also good. The motherboard I'm using is the Asrock X470 Taichi with the latest BIOS available (3.60).
VRM temps, chipset temps, drive temps, GPU temps and RAM temps are all fine. Even the side panel of the case gets pretty warm behind the CPU area after a bit of gaming (currently playing The Outer Worlds).
That seems a little high, maybe 3-5 degrees more than I would expect. Nothing to be too concerned about (after all, it works like an electric heater in the winter, right?). I wonder what thermal compound you used? You could perhaps have another stab at that with something different. However, having typed that, 1.375V on a 7nm chip seems a fair chunk of electrons to be throwing at it. It's just possible you ended up with a Ryzen that's on the hot side of normal.
Take the side panel off, see what the max gaming temp is, and report back if you don't mind. I have a 1700 and even at 4GHz and 1.38V it probably wouldn't hit 77C gaming with an NH-D15. The D15 should be at most 1-2 degrees cooler than your already excellent NH-U12A unit (I looked it up). I would expect, with both my CPU fans fitted, 72-74 degrees. I have neither gamed nor overclocked in some months though so I can't be sure, and we haven't accounted for ambient.
The 580 should produce less heat than my Vega too. I think there may be room for improvement in your 77 degrees.
My GPU actually runs cooler than my CPU when gaming! My RX 580 usually sits at around 70C when gaming.
I'm using the Noctua NT-H1 thermal paste that came with the cooler. I usually use Arctic MX-4 but I doubt there's much difference between the two. My case also has 3 x 120mm Cougar intake fans and another 120mm as an exhaust (I did have two exhaust fans, but one failed recently).
I'll double check, but I don't recall seeing a normal option in the voltage menus. I left it all on auto as I'm pretty sure the other option was manual. I'm now waiting for Asrock to hurry up and release the latest BIOS with AGESA 1.0.0.4 update (Asrock have always been a little slow).
There was a fair amount of confusion about this around launch. Basically, if your BIOS settings are sane, it's likely normal and/or some software isn't taking the readings properly. More information here: https://www.reddit.com/r/Amd/comment...3rd_gen_ryzen/
77C does seem a tad high for load temperatures though, assuming it's a correct reading. Did you have a previous CPU to compare? If the heatsink is pulling in warm GPU air in a poorly-ventilated case for example, it would be hard to get decent temps regardless. Also make sure PBO (precision boost overdrive) is disabled in BIOS as it's effectively an overclock.
No "effective" about it; that is an overclock setting, and IIRC enabling it invalidates the CPU warranty.
My 3D printer has a Noctua fan on the hotend; they are getting a reputation for being whisper quiet but not actually doing the job of cooling the heatbreak. It could be you simply need more rpm on that fan in the profile.
The Noctua fans spin up to near their limit when the CPU starts getting toasty (I think the max is 2000rpm for these fans). I didn't know about PBO invalidating the warranty. Asrock have the PBO setting as "auto" by default so I'm not 100% sure if it's being used or not.
It shouldn't be on. Precision Boost is a performance setting that should be on. Precision Boost Overdrive is an overclock setting that whacks the heat right up for minimal performance gain. The two seem to get commonly confused, which given the naming isn't surprising really :(
Is Zen3 going to be compatible with AM4 Mboards?
Two big things here:
1) AMD is still making 2XXX ryzen chips, and until recently was still making 1XXX parts
2) They're now selling rebranded 2600s as 1600s, for 1600 money. That's a tough value offer for any modern chip to beat
https://www.youtube.com/watch?v=wRO_AUdmfis
Yeah I just watched that.
I wonder if this is one of the AMD pro parts that they promised 5 year availability for, in which case re-marking a 2600 as 1600 is a tad naughty.
But still, that's cracking value if you can find one. I just had a look and could only find 1600 AE parts in the UK.
Amazon have been selling the AF in the UK for a while. I'd post a link but I don't think I'm allowed, cos affiliate links and all; just look at the product details section (been on sale since September, apparently). Does this mean the Zen 14nm line is dead? Does this also mean that mobile chips are going to share the same series name as well (...soon?)?
Interesting, I could only find:
http://amazon.co.uk/AMD-Ryzen-1600-D...dp/B06XNRQHG4/
which claims to be YD1600BBAEBOX (not YD1600BBAFBOX)
But yes, it sounds like 14nm parts are no longer made, but this is new 12nm silicon. Maybe AMD had some 12nm wafer starts to use up, maybe they had a warehouse full of 2600 parts and decided to remark them to justify a lower price. Who knows.
Edit: Found one...
http://amazon.co.uk/AMD-CPU-RYZEN-16...dp/B07XTQZJ28/
but that's more expensive than a 2600X let alone a 2600 so I'm not so tempted :)
http://amazon.co.uk/AMD-Processor-Wr...dp/B07B41WS48/
https://www.amazon.co.uk/AMD-CPU-RYZ.../dp/B07XTQZJ28 Here you go. Albeit without a lower price than the 2600's.
lol, both dug it out at the same time.
Noticed it wasn't in the top selling list though, which is amazingly AMD dominated these days.
https://www.amazon.co.uk/gp/bestsell...ers/430515031/
Desktop Renoir CPUs are listed:
https://videocardz.com/newz/amd-ryze...cessors-listed
Good leak.
The top 65W 8C/16T APU should be quite interesting, but I also guess it will demand a very high price, as it's the first time AMD have had such a high-end APU.
It will be interesting to see how the monolithic die compares to the 3800X when matched to fast DDR4, as the laptop APUs really don't have much tweakability.
However, that's the second one of his videos I've watched and I really don't get that Igor guy, or his website.
I have a bad feeling that if the rumoured Zen2 refresh is locked to B550, the new APUs will also be locked to B550.
Looks like the 300 series chipsets might not work with Renoir:
https://www.reddit.com/r/Amd/comment...esvery_likely/
Apparently the existing ASRock Deskmini will need to be refreshed for Renoir.
Edit!!
Also, looking at pricing, it seems the B550 motherboards are pitched at more entry-level X570 pricing. It would not surprise me if A520 motherboards get close to £100.
The Ryzen 7 4700G is spotted:
https://www.techpowerup.com/267172/a...essor-pictured
AMD has locked it out on purpose by not giving the microcode out. Also, the Zen2 refresh and the APUs use Zen2 cores, so there is zero reason to lock them out from B450, as the CPU base microcode is the same. All the laptops with Zen2 APUs use 400 series chipsets. So if they start locking out a 100~200MHz higher-clocked Zen2 and the APUs, then it's just segmentation.
Edit!!
Also, if you listen to the latest GN video, it's a load of bunkum, as there are 32MB B450/X570 motherboards which work with Zen+ anyway.
I was thinking of getting a Ryzen 7 3700x, is it worth waiting now until the next series comes out?
Ok thanks, I might wait a while and see what develops
I currently have a GTX1080 FTW2 and was thinking about an RTX card, wondering if it's worth selling mine and getting something like an RTX2070, which has slightly better performance and has RT, though I'm not entirely sure of the benchmarks with RT turned on.
Read some reports that it could possibly be September 2020. Likely?
Obviously lol :)
Indeed. It's worth bearing in mind that DDR5 will be megabucks over the same capacity of DDR4 when it first comes out, and it will probably take a very long time for the price gap to close. IIRC DDR4 was twice the £/MB of DDR3 when it first came out, and it took five years plus to be within 10%.
Essentially a decision to go B550/Zen 3 means any upgrade to a new architecture requires new CPU, motherboard and RAM. Waiting for Zen 4 in 2022 means paying through the nose for memory.
It's not an easy decision. If I personally had a 1700X and X370 I would not even consider B550 until the Zen 3 equivalent comes out, to see how it performs for real. If it does hit the rumoured 20% uplift, and that 20% were to apply to my specific workloads, I'd just save the money on a new motherboard and also get the cheaper 3700X.
One comment on the price they estimate: "I'd say about 200, a steal if it comes at that price."
A pretty dumb comment. So an APU with 8 cores and graphics will be cheaper than an 8-core CPU without the silicon for graphics, running at the same base clock but a lower boost?
Of course it would be a steal if it's some imaginary stupidly low price!
The thing is, Zen4 is on 5nm and might double the core count, so even if it works with DDR4, I can see a real issue with memory bandwidth.
So AMD doing this stunt with B550 makes it essentially another limited chipset. I also don't trust them not to release an X670... then proceed to make up some excuse why X570 motherboards don't work fully. I really hope AMD doesn't pull the whole Zen2 clockspeed bump being only for B550 and X570, because that would be taking the mickey.
It can depend on the application, with some being more memory-hungry than others. The creator of y-cruncher has made some interesting blog posts about this, especially when using AVX instructions, even on server platforms with many channels of RAM. http://www.numberworld.org/y-crunche....html#2018_7_2
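A crude way to probe this on your own machine is a STREAM-style copy test. It's nowhere near y-cruncher's rigour, but it shows how far achieved bandwidth sits from the DRAM spec sheet. This sketch uses numpy and counts read+write traffic:

```python
import time
import numpy as np

def copy_bandwidth_gbs(n_bytes=128 * 1024 * 1024, reps=5):
    """Crude STREAM-copy probe: bytes moved per second for a large memcpy.
    Counts read + write traffic; results vary with RAM speed, channel
    count, and whatever else the machine is doing."""
    src = np.ones(n_bytes // 8, dtype=np.float64)
    dst = np.empty_like(src)
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        np.copyto(dst, src)
        best = min(best, time.perf_counter() - t0)
    return 2 * n_bytes / best / 1e9  # read + write

print(f"~{copy_bandwidth_gbs():.1f} GB/s")
```

Comparing the number you get against a bandwidth-hungry workload's needs (y-cruncher's blog does exactly this kind of accounting) tells you whether more cores or more memory channels is the binding constraint.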
A bit of an update regarding 400 series support: https://www.reddit.com/r/Amd/comment..._amd_x470_and/
So I could get a X470 or B450 and still get to use Zen 3?
From my understanding of the post, it's a possibility, but has some limitations. If you don't already have a 400 series board, a 500 series would probably be a safer bet IMO.
I would just wait for a B550 motherboard.
See they are only a month away, will see what pricing is like
Ryzen 9 4950X spotted:
https://videocardz.com/newz/amd-ryze...4-6-ghz-listed
https://www.youtube.com/watch?v=84OkOLzRPxY
The Ryzen 3 3300X when tuned does very well in WoW.
Zen2 refreshed leaked:
https://videocardz.com/newz/amd-rumo...d-3750x-coming
Also no 4000 series APUs on B450/X470.
1usmus says the end of October this year for the Ryzen 9 4900/4950 and closer to the new year for the other SKUs:
https://twitter.com/1usmus/status/1263733833851179009
https://twitter.com/1usmus/status/1263735698810646528
Was just listening to a GN video on responses from AMD about the whole B450 AM4 thingy. IDK; AMD had gone on record saying AM4 is going to be with us until DDR5, so if that's delayed it would be nice, as I could get more than two generations out of my X570.
There is more testing from that channel:
https://www.youtube.com/watch?v=X6RSEU1d-g8
So the problem is not only the latency in Zen2 but the IF bandwidth. Even overclocking a Ryzen 7 3800X to 5GHz won't help that much!
Some more leaks about the desktop Ryzen 4000 APUs:
https://adoredtv.com/biostar-outs-47...us-e-variants/
So the Ryzen 3 4200G will have 4 cores and the Ryzen 5 4400G will have 6 cores.
AMD will be adding an XT moniker to the new Zen2 refresh CPUs:
https://videocardz.com/newz/amd-to-a...t-on-june-16th
It also looks like mainstream will be stuck with RDNA1, as RDNA2 will be segmented to higher-end models:
https://hardwareleaks.com/2020/05/23...avi10-refresh/
So it looks like, if you are a mainstream buyer of graphics cards, it's going to be a longer wait if you want RT. Makes me wonder whether consoles will look a better alternative! :(
If their new top of the range is indeed over double the die size of a 5700XT, then you can expect it to cost rather more than double the cost of a 5700XT, given how many fewer of them you can cut out of a circular wafer.
That's instantly not a product for me, but I wish them luck.
There is another Navi 23 which is apparently around the same size as Navi 10 or a bit larger. Think of an RDNA2 version of Navi 10; apparently the stuff responsible for the RT capability does not add massive die area either. I suspect it is probably the same GPU configuration as the PS5. After all, the console SOCs are hardly massive once you consider the area for the CPU dies, etc.
So the problem is if Navi 10 is kept at the existing price, ie around 5600XT/5700/5700XT performance for £250~£350, and the true 5600XT/5700/5700XT successor is then priced above £350. Remember, the original leak was the RX680 being rebadged as a 5700XT with a price bump, since the RTX2070 was a bit underwhelming. This is the same stunt Nvidia pulled with Turing.
If the Navi 23 cards replace the 5600XT/5700/5700XT at current pricing with better overall performance, RT, etc, that's fine, and if the Navi 10 cards dip into the market under £250, ie where the GTX1660 Super and RX5500XT are, it would be OK.
But the problem is both companies want to push mainstream pricing upwards and upwards, way past any real inflation, and Nvidia are now making 65% margins with 2/3 of their revenue from gaming. The irony is we need Nvidia to be aggressive on pricing with Ampere, because if they are not, I am uncertain AMD will be. I would love to be proved wrong, but we will see!
So for me, if they want to charge £350+ for a 5700XT with a bit better performance and some RT capability, they can keep it. If this happens, I will need to question whether a console next year would do me for the immediate future. The PC will do for all the older games I play.
Presumably all the small/lower margin RDNA2 parts are going to console, so they can only really sell high margin binned large die parts to PC, alongside old process refresh chips (assuming the RDNA1 refresh will be using the same process as RDNA1). Once new process production/yields ramp up they'll be able to offer the smaller RDNA2 parts to PC as well.
I don't like it (gives nVidia scope to increase price of mid range parts) but I can understand it from a making best use of your raw materials point of view.
It will be interesting to see how spaced out the product stack from nVidia comes as a result of this - there could be some competition in the high-end space so we might get a number of cards within a small price range of each other.
They are all 7nm AFAIK, which should be quite mature by now, and GDDR6 should be cheaper now than it was in 2018/2019. This is why Nvidia made some new mainstream parts with GDDR6, as GDDR5 became more expensive!
AFAIK, people were expecting 7nm EUV, but it apparently seems to be more of a slight improvement to the current 7nm. However, the current 7nm is a few years old now, so it probably is what TSMC will be transitioning over to anyway. A bit like Intel fiddling with 14nm. I would love AMD to do what they did with the HD4870 and HD5870, but I have a feeling it's going to be very dependent on how aggressive Nvidia is.
The Asus B550I Strix is $229:
https://smallformfactor.net/forum/th...i-strix.13522/
So with VAT you can get the X570I mini-ITX motherboard for the same price. It also uses an active cooling fan. So are the A520 mini-ITX motherboards going to cost the same as B450 mini-ITX motherboards?
Edit!!
The B550 pricing is very high:
https://www.anandtech.com/show/15810...pearing-online
$134 for the ASUS Prime B550M-A, which is a basic motherboard, ie around £130 with VAT.
So below that we'll have rubbish A520 motherboards with no overclocking.
https://www.chiphell.com/forum.php?m...e%3D1&mobile=2
Quote:
RYZEN 9 3900XT - 4.1GHz base, 4.8GHz boost.
RYZEN 7 3800XT - 4.2GHz base, 4.7GHz boost.
RYZEN 5 3600XT - 4.0GHz base, 4.7GHz boost.
An extra 200MHz boost? Nice. I find my base clocks are nonsense, but the numbers going up 300MHz hopefully means it does actually clock higher.
That ties in with the Linus Tech Tips video I was watching earlier, saying AMD dropped the price of the 3900X by $100, which they said would have rather changed the conclusion of their 10900K review.
Zen2 refresh clockspeeds have been leaked:
https://videocardz.com/newz/amd-ryze...-up-to-4-8-ghz
Up to 4.8GHz boost clock speeds.
My mate has got about £350 for a new build, what is the best combo he can get on AMD?
Any chance he could get something that's going to support Zen 3 too?
I've earmarked the 3600 and 16GB of 3200MHz RAM on Scan, that comes to £254.
I believe a B450 board will support Zen 3
Seems expensive. I picked up a new 3600 for £145 recently, and you can buy 16GB 3200 LPX RAM for ~£70 on Amazon
Both via HUKD
Given the rumours of the 3000 series XT chips likely to launch before Zen3, it might be worth hanging on a wee while.
Could have sworn the 3100 was £75 on Amazon before it sold out.
Crucial Ballistix RAM is £65 on Amazon for the 3000MHz kit. I like mine and would be reasonably confident I could OC them to 3600MHz.
Is it generally acknowledged that memory controller performance/reliability goes up as you go up the Zen2 stack - or is it fairly consistent across the range (accounting for silicon lottery)?
I'm more curious than anything.
https://www.amazon.co.uk/Crucial-Bal...0572951&sr=8-1