Vega 12 spotted in open source driver code:
https://www.phoronix.com/scan.php?pa...Vega-12-Posted
I wonder if it is a new GPU for desktop or laptop??
That article confirms Epyc launches before Ryzen next year on 7nm. Also, AMD will have the first large-production 7nm GPU in Vega 20, for commercial usage. I think they have given up until Navi on trying to compete with Nvidia in PC gaming graphics TBH! :(
I find it hard to believe that it's sold at a loss at this stage TBH, it could just be propaganda?
https://www.anandtech.com/show/13210...on-pro-wx-8200
Apparently Hynix was the original choice for AMD, like with their previous gen, but they were forced to go with Samsung instead. From what I read Hynix was meant to be more cost effective, but I don't know now, as Hynix were meant to have it available early last year IIRC, and apparently that is why Vega was so delayed!
As AMD did a lot of the development/validation of HBM with Hynix it's always possible that Hynix will (eventually) give them a good deal?
It looks like the Bethesda collaboration has gone to crap already:
https://www.youtube.com/watch?v=gAMkNwOYhDA
So that looks like AMD has given up on all major studios now with regards to games. They still have not learnt a thing, have they?? They really need to get games better optimised for their CPUs, and if they can't even keep last year's collaboration going with Bethesda, who own the id Tech engine, then WTF are they doing??
Sounds like they're still not putting enough resources into RTG.
With almost 100% console dominance you would think it would be fairly easy for them... That's why I'm wondering if some of the recent Vega deals, like an RX V64 for £440, are possible due to Hynix RAM.
Deal isn't on now so I wonder if they sold quickly at that price and they jacked it up.... It was a pre-order so perhaps we will know in a few weeks when one gets GPU-Zd. It was a Powercolor Red Devil.
Edit: Still on at overclockersuk:
https://www.overclockers.co.uk/power...=affiliate/tag
Powercolor RD 56 for £399
https://www.overclockers.co.uk/power...=affiliate/tag
MSI Airboost 56 for £440
https://www.ebuyer.com/821669-msi-ra...CABEgJmMPD_BwE
There's a good mix of stuff under the Bethesda label though.
I imagine that the id stuff has top-notch programmers especially in terms of engine.
The best own-made stuff - yes I'm talking TES and Fallout here - on the other hand...
Some ancient old buggy engine which was poor 10-15 years ago never mind now.
Far more significant to me is that DICE seem to be very central to Nvidia and RTX, after for years being closely involved with AMD. So much so that they were originally one of the prime movers behind Mantle, and by implication DX12 and Vulkan.
I guess they're just following the industry buzzwords and technology. They knew Nvidia would be able to create a serious hype train with ray tracing and RTX, so they jumped on board and grabbed a table seat.
Prior to this Nvidia hadn't really pushed a technological advantage for a long time, so AMD had no option but to push technology.
Another Vega on offer
Gigabyte 64 for £439 (ex delivery)
https://www.overclockers.co.uk/gigab...gx-19n-gi.html
Not sure if true:
https://wccftech.com/exclusive-amd-s...to-become-ceo/
OFC,being Wccftech I wonder how much of it is commentary?! ;)
Well this is pretty major. GloFo abandoning cutting-edge nodes and therefore requiring renegotiation of the WSA. I wonder if AMD have known about this for a while, hence 7nm TSMC announcements for Vega, Navi and Zen2?
https://www.anandtech.com/show/13277...nm-development
https://www.anandtech.com/show/13279...s-gpus-at-tsmc
I made a new thread about it since its big news:
https://forums.hexus.net/pc-hardware...tion-tsmc.html
I honestly think 7NM Vega and Epyc will ship first. 7NM desktop Ryzen might get delayed, methinks, or we'll see a 12NM refresh, as the main concern will be volume, unless AMD has secured enough capacity.
Also, I wonder if this means the PS5, etc. will be back to two chips - maybe a 12NM CPU and a 7NM GPU connected via IF??
It looks like a 2C/4T Ryzen is being released for desktop:
https://videocardz.com/77816/amd-2nd...-slides-leaked
AMD is also using the higher-performance TSMC HPC 7NM node:
https://www.reddit.com/r/Amd/comment...ther_than_soc/
There is also a rumour ATIC is going to sell GF:
https://www.bitsandchips.it/english-...lobalfoundries
About time too. The 2C/4T configuration has been on the roadmap for ever, and the Athlon 200GE naming has been rumoured for almost as long as we've had 2000-series APUs in retail! IMNSHO this is actually the perfect configuration for mass-market mainstream desktops (particularly the kind that major enterprise customers fill their offices with) - which they obviously realise given it's getting a 'Pro' version.
Plus it'd make a sweet HTPC chip too.
Would be nice to see a ~£50 mini-ITX AM4 mobo to go with that....
Tell me about it. I was looking at B450 ones and I would need to spend £125 to £150 for a reasonable one and I want a Ryzen 5 2600.
OTH,the ASRock B350 ITX one is £85:
https://www.amazon.co.uk/ASRock-Mini.../dp/B073BFTJQK
However,it would probably need a BIOS update for the APUs.
Confirmed: https://www.amd.com/en/processors/athlon-and-a-series
:D
EDIT: looks like almost everyone but Hexus has a story on it already ;)
https://www.tomshardware.co.uk/amd-a...ews-59121.html
https://www.anandtech.com/show/13332...n-200ge-55-usd
https://www.techspot.com/news/76295-...lon-200ge.html
https://www.overclock3d.net/news/cpu...a_3_graphics/1
Hmm, it looks like BFV with RTX prefers more threads:
https://www.pcgamesn.com/amd-battlefield-5-hardware
Quote:
“What we have done with our DXR implementation is we go very wide on a lot of cores to offload that work,” explained Holmquist, “so we’re likely going to require a higher minimum or recommended spec for producing RT. And very wide is the best way for the consumer in that regard, with a four-core or six-core machine.
“We haven’t communicated any of the specs yet so they might change, but I think that a six-core machine – it doesn’t have to be aggressively clocked – but 12 hardware threads is what we kind of designed it for. But it might also work well on a higher clocked eight thread machine.”
The PC we used to play the RTX-enabled version of Battlefield 5 on was running an older Intel six-core CPU with 12 threads, but not a Coffee Lake chip. Still, hitting 12 threads on an Intel processor can get expensive. On the AMD side, less so.
Picking up a Ryzen 5 2600 would give you enough processing grunt to accompany your Nvidia RTX 2080 Ti graphics card. Though if you’ve spent $1,200 on a new GPU chances are you’ll want at least a Ryzen 7, of course you might not have any cash reserves left after picking up the new Turing monster…
Hopefully that means decent clocks then!
I suspected that might happen. I think I said in another thread, I wonder if Samsung are interested?
It would be really quite amusing if CPUs can capably offload some of the ray tracing workload given the cost/marketing/die size of the RTX cards!
That's... almost bizarre!
I wonder what's causing issues with 14nm supply at Intel? It's now a very mature node so I wouldn't have thought anything like yields. Bigger die sizes vs previous SKUs could indicate fewer dies per wafer but the difference isn't that drastic (about 122 vs 149mm2 for 6700 vs 8700) and I wouldn't have thought Intel would be right up against wafer capacity limits for that to cause noticeable supply issues?
I wouldn't have thought they would convert fab space from 14 to 10 to the point of constraining supply, knowing the issues they're having though? Unless it's a case of increasing demand but being unable to meet it due to 14nm capacity not increasing to match? But given release after release of Skylake it's not like the 14nm continuation was a last-minute panic either?
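FWIW, the dies-per-wafer point can be put in rough numbers. This is just a back-of-envelope sketch using the classic first-order formula, assuming a 300mm wafer and the approximate ~122 and ~149 mm² die sizes mentioned above; it ignores scribe lines, yield and reticle constraints:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """First-order estimate: wafer area divided by die area,
    minus a correction term for partial dies lost at the wafer edge."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Approximate die sizes quoted above: ~122 mm2 (6700) vs ~149 mm2 (8700)
print(dies_per_wafer(122))  # roughly 519 candidate dies
print(dies_per_wafer(149))  # roughly 419 candidate dies
```

So the bigger die costs them around 20% of the dies per wafer - noticeable, but as said above, not a drastic enough difference on its own to explain supply problems.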
AMD announced some extra Ryzen CPUs:
https://www.anandtech.com/show/13343...500x-and-2300x
Two new 'E' models at 45W each, the 8C/16T 2700E and the 6C/12T 2600E, plus two new 'X's at 4C/8T and 4C/4T.
To hit 45W, the 2700E has obviously suffered a lower base clock, but the max turbo is also lower.
https://www.reddit.com/r/hardware/co...2018_deutsche/
AMD confirms it’s working with Microsoft on the future of cloud gaming: Quote:
Devinder Kumar - AMD Chief Financial Officer at 2018 Deutsche Bank Technology Conference Call
Transcript
7nm node:
Q: You mentioned a little bit about the process technology, so why don’t we check that box as well. Last week or the week before, we saw global foundries throwing the towel on the 7-nanometer node. Talk a little bit about, holistically, your view on how AMD uses different foundries and what that change means via your WSA?
A: Yes. So if you go back to the context, and I know we talked about in the 2016 timeframe. When we laid out the multigenerational roadmap in terms of server, data center, commercial, we talked about having access to leading edge process technology. In 2016, we modified the WSA with GlobalFoundries and that gave us the flexibility in terms of having access to leading edge process technology. If our products are on time, we want to make sure that process technology was not a constraint in terms of introducing the products to the customers. And that’s exactly what the 2016 modification was about. As it turns out, it played out. Today, as we sit here, TSMC has done a very good job with execution on the 7-nanometer technology node.
We said back in 2015 and said in 2018 timeframe, we think our competitor is going to have. We thought they already have the 10-nanometer node out there. And we were prepared to go ahead and have our 7-nanometer products in the 2018 timeframe. We've stayed the schedule, their schedule has slipped. Today, with the GlobalFoundries evolving their strategy from a process technology standpoint, we are targeting all the 7-nanometer products at TSMC. And like I said earlier, sampling the GPU 7-nanometer second half of this year later this year; and then going ahead and launching it this year; and then in the server CPU space, launching that in 2019. So that’s playing out exactly as we had targeted and we’re very pleased with being able to stay on track with process and product technology.
Semicustom (consoles):
Q: That provided a ton of great revenue; whether it was the Sony side, the Microsoft side; now you have a Chinese game console builder as well; but great revenues to allow you to have the operating and earnings to invest in these other areas. But how do you think about the semicustom, going forward. Is that something that should be declining over time as this generation of consoles has peaked out? Or are you optimistic that there is going to be refreshes and/or new versions of semicustom opportunities?
A: We like the semicustom model a lot. Semicustom model is one of those; as you observe the game consoles, you win the designs; some of the engineering expenses get defrayed by the input from the customers; we go ahead and get the chip out; and after that, it’s a mutually beneficial deal where you can predict revenue. Going back to 2012, 2013 timeframes, we’ve had predictably somewhere between $1.5 billion to $2 billion of revenues coming from the game console business, both Sony and Microsoft, and that has allowed us to invest in exactly the roadmap that is delivering right now. We like that business a lot. We are competing for the next generation product. But Sony and Microsoft have to make their decisions and then we'll take it from there. But we like it a lot from an overall standpoint.
GPU excess stock, competition & Turing:
Q: Last question on the graphics side of the C&G. How do you view the competitive environment? Now that Nvidia has Turing out, it seems like they would at the very least introduce a new high price point but push the prices down for their last generation chips. And that might be more of a direct competitive comparison for you. Are you seeing any changes in the competitive dynamic?
A: The view is, first of all, the introduction of the product, the timing is very interesting. I think both companies are seeing elevated levels of graphics inventory in the channel space. We need to work through that over the next one or two quarters. And then obviously the ASPs for the new product that comes is very high. And I think the volume -- only when you get to the volume SKUs is there going to be a benefit from a new product standpoint. We continue to have a roadmap in terms of introducing the 7-nanometer GPU for the data center, because that’s where the largest opportunity is for us from revenue and from the profit standpoint, and we’ll come out with the product from the competitive standpoint. I feel pretty good from a competitive standpoint in the graphics space. We have gained market share, overall, over the last 12 months or so, going from below 30% to 33%, and we'll continue to be competitive as we look forward from here.
7nm consumer GPU:
Q: And the absolute last question on the graphics side: 7 nanometer Vega coming to the data center side of it, you've talked about that before, at the end of this year. When should we expect 7 nanometer to occur on the more traditional gaming…
A: We haven’t missed that piece. I think, if you look at it from what we have stated, we have the 7 nanometer data center GPU launching later this year; we are sampling the 7nm CPU this second half of ’18 and then launching in 2019; after that, we'll have the client piece of it; we haven’t been specific about the timing; and graphics will be coming out later than these products.
GPU computing ecosystem:
Q: NVIDIA, we talk about CUDA and the ecosystem around the programming to do the GPU computing side of things. How do you compete with that ecosystem from a software perspective?
A: I think, first of all, we have to invest in that area, which we have continued to invest. You’ve seen OpEx go up for the company and the largest area of investment is R&D. And in R&D the largest area is machine learning and software, that’s an area of investment. We have the hardware obviously coming out. We are investing in a big way on the software side of it. And then the other thing that I think is going to play out is the Open Source as opposed to the way CUDA works. And if you go back and look at literature, not in the financial columns and all of that, in the technical literature working with mega data center customers in particular, because they like the open software solution too, and now there’s a lot of discussion even by a competitor about open software as opposed to continuing with CUDA forevermore.
https://www.pcgamesn.com/amd-microsoft-cloud-console
Apparently AMD Rome scores and a picture of the chip have leaked:
https://wccftech.com/amd-epyc-rome-7...enchmark-leak/
AMD Zen 2 to have PCI-E 4.0:
https://tyrone.tech/amd-to-be-early-...and-navi-gpus/
The Chinese Zen based games console hands-on by DF:
https://www.eurogamer.net/articles/d...hinese-console
I hope so, I'm sure they lost a lot of sales with their slow adoption of PCIe3 even though it didn't matter much to graphics performance.
It will no doubt be adopted by NVMe storage though, which again won't matter a jot to everyday experience but will show heavily in benchmarks.
ComputerBase speculated on that last week too:
https://www.computerbase.de/2018-09/...-vega-20-rome/
Seems AMD are keen to be the first to release PCI-E 4.0 CPUs, but with 5.0 so close behind, waiting a bit longer might have been a good idea too!
AMD are being fairly quiet about their consumer roadmap. It's nothing unusual, but they're quite open about EPYC and the 7nm pro Vega card...
Or have I missed something?
I think consumer CPUs might take a bit longer to appear, as AMD might prioritise commercial products first, so with limited 7nm capacity it's possible Intel might sneak out their "new" refined 10nm-based products to desktop in a similar timeframe now.
He has an update on the previous article:
https://www.semiaccurate.com/2018/09...cess-problems/
Quote:
SemiAccurate has a little more on Intel’s 10nm woes, this one is actually goodish news for once. It involves a technical point that we told you about earlier and how it is used.
In our earlier exclusive outing of Intel’s 10nm problems we laid out 4-5, depending on how you count, issues with the process. Since then we have gotten new information on one of them, more specifically how it is used and where. The up side to this information is that the new downgraded ’10nm’ process from Intel will not take as big a hit from the removal of this tech as SemiAccurate said earlier, but it will still take a hit.
I'm pretty sure they've already explicitly stated that Zen 2 will be commercial/enterprise first? Don't forget that - AFAIK, at least - EPYC isn't getting a 12nm refresh. We had new consumer CPUs from AMD less than 6 months ago, and that generation still hasn't been fully released (we're still waiting on the 2500X and 2300X). They're not going to start announcing 7nm consumer CPUs while the 12nm ones are still relatively new...!
It makes less sense on the consumer side where we've had Polaris, a very limited push of Vega, then nothing. Then again, Polaris released slightly behind ... erm, was GeForce 10 Pascal? ... and we've not had the next gen cards from NVidia yet, so perhaps it's not that surprising that we've not heard much about AMD's GPUs...
What I meant is that the roughly one-year cadence might be missed - we saw 12NM Ryzen launch around 12 months after the launch of the first 14NM Ryzen SKUs, so you would expect Q2 2019 for 7NM Ryzen. I suspect we might see 7NM Ryzen-based CPUs/APUs more towards 2H 2019. It would not surprise me if we see the APUs launch before the CPUs too, as there are no 12NM Ryzen APUs yet, and laptops would benefit more from power consumption reductions and are less dependent on high clockspeeds. ATM we don't really know how TSMC 7NM HPC clocks, let alone the volume currently.
Also, have people noticed Nvidia is apparently going with Samsung 10NM/8NM over TSMC 7NM, which perhaps hints that volume is an issue, since GF was meant to be responsible for consumer Ryzen CPUs IIRC.
Edit!!
BTW,I have heard one or two things about current clockspeeds for Rome,and where some of the improvements might be targetted at.
I would hazard a guess that reducing IF power consumption (also meaning they can run it "faster") might be a big focus of the 7NM shrink, rather than out-and-out clockspeeds, especially if they are still using 4-core CCX units, as using more CCX units will mean more demands on the IF.
https://www.pcgamesn.com/amd-asus-gi...shortage-china
"AMD motherboard shipments are tight over in China, leading to mass shortages as major suppliers’ factories can’t keep up. But, unlike Intel’s 14nm shortage, which has been impacting most of Intel’s supply chain, this looks to be a symptom of a sudden explosion in demand for AMD motherboards across the country."
Take with a ton of salt:
https://mobile.twitter.com/witeken/s...43796660387840
2020 for desktop 10nm parts??
200GE reviews:
https://www.techspot.com/review/1698-amd-athlon-200ge/
https://www.youtube.com/watch?v=pCDJERMTL3s
https://www.youtube.com/watch?v=J4dXAeBNRqY
https://www.youtube.com/watch?v=5KG7mj48kNU
Retailers have put up listings at between £50 to £60.
Not bad for a CPU which only has a 35W TDP and also has AVX2 enabled.
It stings a bit seeing it behind the A12-9800 in games, another GPU execution unit or 2 wouldn't have hurt; enough to keep it behind the 2200G but give it a bit more grunt. That said, for a basic PC this looks like a great choice, especially when you consider what else you could drop into the socket in future.
So the CPU's straight-up faster than the A12-9800, and the overall package is faster in most games, at half the list price.
It does all that while drawing less power than any other socketed desktop CPU tested, and well under half the peak power of the A12-9800.
And the clock-bumped 220GE and 240GE are going to be faster still.
Noice! Wouldn't mind one of them in my HTPC eventually.... ;)
EDIT:
Interestingly only some games though - I count 3 wins for the Athlon and four for the A12, with one tie. But then again, this isn't an A12 9800 replacement - that's the Ryzen 3 2200G, which monsters the A12 at the same just-under-£100 price point. The fact that this generation's dual core entry level processor is competing with the previous generation's flagship is pretty good news IMNSHO...
EDIT 2: just because it occurs to me that *technically* the Ryzen 5 2400G is actually the A12 replacement (i.e. top-tier APU), but of course it's so much faster that they can afford to price it higher than the A12 ever reached....
Huh, I've just checked on PriceSpy and they reckon that the A12 9800 was retailing between £140 and £150 from launch up to August 2017, so the 2400G really is a direct replacement :o
Which makes it even more impressive that the Athlon 200GE trades blows with it so closely, IMO...
Apparently there is a 12NM Polaris 30 being released soon:
https://wccftech.com/amd-radeon-polaris-30-gpu-family/
*shrug*
Wonder if this is aimed at laptops.
OTOH, I notice you can get a 4GB 580 for £200 on Amazon atm
https://www.amazon.co.uk/Sapphire-Sa...dp/B0797XLR9Z/
I would take this with a grain of salt:
https://hardforum.com/threads/the-ra...ample.1967802/
Quote:
The RTG has just received its first Zen 2 sample (to optimize for) and it's really impressive.
8C/16T
4.0 GHz/4.5 GHz
DDR4-3600 CAS 15
Radeon RX Vega 64 LE
__________________________
The good: It's already nibbling at the Core i7-8700K.
The bad: It crashes a lot.
The ugly: It crashes all the time. Some of the tests have to be run multiple times because they crashed before finishing.
Quote:
Apparently, there has been some changes to the "interconnect" (wherever that is) that requires RTG to make changes to the video drivers and that why RTG is getting the sample.
Edit!!Quote:
never said that Zen 2 would only have 8C/16T, only that that particular sample has 8C/16T.
There may or may not be more cores. I don't know.
I know next to nothing about AMD's processor teams.
Aghh, just as I bought the bits for my upgrade. Oh well.
I can't see it being out before around March '19 anyway.
As mentioned in that thread, I wonder how power consumption will look for the 8C Skylake vs Ryzen/Ryzen2? I can't see Intel getting much more out of 14nm and each Skylake revision has driven power consumption higher.
Well I just bought a Ryzen 5 2600, an Asus B450 ITX board, and decided to get some 3200MHZ RAM (the price was much cheaper than elsewhere, and I will probably sell the 2400MHZ stuff, so it should hopefully be more palatable). It's just one of those situations where my old setup is having some issues now (it's getting on a bit).
Any benchmarks which I should run to compare?? So will be moving from a Xeon E3 1230 V2/Core i7 3770 to a Ryzen 5 2600.
Well the issues were more the motherboard having some niggles, and performance in some non-gaming situations starting to be an issue. If I was doing it for Fallout 4, Intel would have had the edge, but I am hoping it does help, although I am looking at getting a PCI-E M.2 SSD at some point, nearer to the Black Friday week.
This will be the first time I have changed platform since 2011!!
Interested in how it works out Cat
Due to time restraints I can't run too many benchmarks so they will be:
1.)userbenchmark
2.)Cinebench R15
3.)7 zip
4.)DxO
5.)HWBOT X265
6.)Fallout 4
I will try and run ROTTR if possible too.
Edit!!
Will run the X265 HD benchmark as I got some errors with the HWBOT one.
Not sure I can do the FO4 comparison now. Despite having a saved profile for NMM it had a hissy fit and deselected the mods and since I only selectively enabled some mods in an order I would have to remember what I did for loads of them. I did try doing it but the game seems to look a tad different now so fun times ahead. However in terms of draw calls,etc it's around the same. The FPS is not always massively higher but it's smoother even in scenarios where the FPS is lowish. Looking at the Task Manager I can see quite a bit of SSD activity. Methinks the SSD access is more consistent now.
However on the flip side the RAM I got from OcUK is Samsung B die.
The budget arena is where I'm looking. Right now in the UK an R5 2600 costs £150, which is sub-$200. I have a 1200 that cost me £100 back in July last year and can be bought for about £40 ($60) used. Now, will Zen 2 bring us higher core counts in this entry-level segment, or keep with the current Zen trend? If we get 7nm, what speeds should we expect? I'm hoping we hit the 5GHz boundary on air, but it's unlikely. Any thoughts?
I just got a Ryzen 5 2600 for £137,so AMD is delivering decent value in that area.
I think AMD will probably have a 16C 7NM Ryzen, or perhaps even a 12C one. But the problem is Intel has upped the price of its top consumer CPU, the 8C Core i9 9900K, to £450.
I can see AMD initially releasing a 16C Ryzen FX at £400 to £450, and probably a 12C at £300 to £350. This might mean we won't see any significant increases in core count at the pricing tiers I would normally buy CPUs at.
However, TBH I am actually less interested in more cores than in core IPC and clockspeeds, and more importantly improvements to IF.
IF consumes a significant amount of power relative to the cores in Ryzen.
https://images.anandtech.com/doci/13...00Ka_575px.png
https://images.anandtech.com/doci/13...700X_575px.png
The reason why Ryzen likes faster RAM is because IF is clocked somewhat lower than it can be,most likely to save on power consumption and cooling.
AFAIK the next Ryzen has a number of improvements in this regard, i.e. if they can increase the speed at which it is clocked, etc., it could make a big difference, especially for games. Many games are latency dependent and Ryzen has relatively high memory-to-CCX latency.
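As a rough illustration of that RAM/IF coupling (a simplified sketch - on Zen/Zen+ the fabric clock runs 1:1 with the memory clock, which is half the DDR4 transfer rate; it ignores dividers and BIOS specifics):

```python
def fabric_clock_mhz(ddr4_rating: int) -> float:
    """Zen/Zen+ run the Infinity Fabric at the memory clock (MEMCLK),
    which is half the DDR4 transfer rate (DDR = double data rate)."""
    return ddr4_rating / 2

# Faster RAM directly raises the fabric clock, hence the gaming gains:
for rating in (2400, 2933, 3200, 3466):
    print(f"DDR4-{rating} -> IF at ~{fabric_clock_mhz(rating):.0f} MHz")
```

Which is why going from DDR4-2400 to DDR4-3200 is effectively a 33% fabric overclock, not just a memory bandwidth bump.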
Well, remember the Ryzen 7 1800X came out at £499 since the equivalent Intel CPU was nearly £1000. The thing is, AMD is to a degree reactive to Intel pricing, so will price in relation to Intel; it's been a thing for a very long time. So I can see them fully taking advantage of Intel upping prices, and remember, doubling the cores and improving single-threaded performance at the same price as a 9900K would be huge. Then if Intel drops prices, AMD will, and so on. This is how things used to be, unlike what has happened in recent years. My main concern is regarding TSMC volume - 7NM is probably oversubscribed, so it's hard to say how many AMD 7NM CPUs will be available at launch.
OFC,AMD might say,lets really do it to Intel,and price the top 7NM Ryzen CPU much lower,but we will see.
Either way,you have a reasonably solid(and new) CPU with decent speed RAM,so you can get another year or more out of it. I expect as 7NM volume improves,we will see more competitive pricing as long as exchange rates are OK.
OK I got the original mod list back,so can do the Fallout 4 comparisons.
I was looking for an appropriate thread to post this but it doesn't seem like Intel have posted about the new Intel lineup yet.
I see Intel have announced that some of the new lineup will include a soldered IHS (and are selling it as a feature lol). The bit that kinda surprises me is this makes it down to the six core part the Core i5-9600K, which makes me wonder if this is a salvaged 8-core die? I can't see the 6 core being functionally any different to its predecessors so it would seem strange to change the production process for what is effectively a single mid-range part (at least, with fewer threads and lower clocks than its predecessor).
I have posted my Ryzen 5 impressions in this thread:
https://forums.hexus.net/pc-hardware...n-rebuild.html
Power Delivery Affecting Performance At 7nm:
https://www.reddit.com/r/Amd/comment...rmance_at_7nm/
This is why I think 7NM Ryzen won't clock as high as people think,and AMD will concentrate on other areas first.
Aside from AMD being very, very keen to get back some server market share, and despite their very public positive comments about TSMC 7nm, it is of course possible that all this time they had planned servers and low-power stuff for TSMC but had expected GF's 7nm to be available for the high-frequency stuff.
Still, if 7nm doesn't buy them higher frequencies, what do they have to sell for desktop?
As the 65W 8C/16T part and various EPYC server chips show, the current Zen is already very, very power efficient. I can't see 7nm bringing that down to 40W or so doing much to make Ryzen more popular for desktop usage.
Every node in recent history has had tons of similar articles published covering some difficulty or other, whether that be leakage, electromigration, thermal density, resolution, defect density, you name it - I wouldn't take this as a particular stand-out cause for concern IMHO. Every node brings with it new challenges, which is precisely why design costs continue to rise.
It's another consideration and a part of a bigger picture, but it's a bit like worrying about performance of an upcoming sports car because a news company has discovered the engine has a lower redline, while missing out the fact that it also has a higher displacement, a turbocharger, and more cylinders...
Design challenges are part of the reason nodes didn't just skip from say 130nm down to 5nm because they knew they wanted to get there eventually - challenges are encountered, lessons learned, and solutions discovered, making progress along the way.
WRT Zen2 - firstly remember it's not the same core as Zen so they won't be relying purely on clock speed for performance uplifts. AMD acknowledged some low-hanging fruit which they didn't manage to complete in time for Zen and I imagine most of this will be destined for Zen2.
Also, the extent of the power delivery issues isn't clear, and I doubt Zen was pressing up against any walls to begin with given other factors seemed to be limiting its clock speed, and it could just be that the first gen 7nm will struggle to increase power density, so relaxing density for critical areas of the core could be one possible workaround at the cost of some die space. Intel have done something similar with their 14++ process being less dense than the first gen, so it comfortably clocks higher, but for a number of possible reasons.
Again, it's a real stretch to become overly concerned about snippets of information like this taken in isolation.
The interesting bit was the comment about wire resistance. A couple of decades ago, although new nodes gave new problems the worsening wiring resistance was hugely offset by improved capacitance to the extent you got huge clock speed rises for the same power which along with double the transistors for the same silicon cost made it worth chasing down the problems. So yes, there have always been comments that silicon was getting harder but there was never any doubt that the gains would be worth it. Easy clock increases went away some time back, we don't get double the transistors any more on a new node and design costs increase the design risk and the quantity of chips you need to sell to make it worthwhile. I have to wonder if GloFo saying they have had enough of chasing smaller nodes is the start of a much more gradual process improvement rather than a forced chasing of Moore's Law to try and make it self fulfilling.
A 386 had 275K transistors. On 1.5um that allowed 20MHz, at 1um you could get 33MHz, AMD got their core to 40MHz at 0.8um. If modern parts scaled the same way, a 4GHz cpu at 16nm would become a 6.5GHz cpu at 10nm and 8GHz at 7nm; but sadly they don't.
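To make that scaling arithmetic explicit (a naive inverse-linear frequency-vs-feature-size model, purely illustrative - as the post says, modern parts don't actually scale this way):

```python
def scaled_clock(base_ghz: float, base_node: float, target_node: float) -> float:
    """Naive model: clock scales inversely with feature size,
    as it very roughly did in the 386 era."""
    return base_ghz * base_node / target_node

# 386-era sanity check: 20 MHz at 1.5 um predicts ~30 MHz at 1.0 um (actual: 33 MHz)
print(scaled_clock(0.020, 1.5, 1.0))

# If a 4 GHz part at 16 nm scaled the same way:
print(scaled_clock(4.0, 16, 10))  # ~6.4 GHz at 10 nm
print(scaled_clock(4.0, 16, 7))   # ~9.1 GHz at 7 nm
```

The model lands in the same ballpark as the post's figures, and the gap between those numbers and real modern silicon is exactly the point being made.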
We haven't seen routine clock speed increases for quite a long time now so that's nothing particularly new, and we're already seeing cutting-edge nodes becoming more specialised and a less obvious choice outside of where performance is critical.
We've also gone from seeing GPUs being pipe cleaners for nodes to it being smaller, low-power mobile SoCs.
Then there are the problems Intel have had with 10nm, relaxing the pitches to get it out the door; to a lesser extent 14nm suffered some setbacks too, and we didn't really see it on desktop in real volume until Skylake.
I still think all the people saying the 7nm Ryzen CPUs will be 5GHz ones are setting themselves up for a disappointment.
AMD is most likely going to increase the core count, increase IPC, improve AVX throughput and, more importantly, try to get IF speed up. The only reason Intel is pushing clock speed is because they already have a tried and tested core and a very mature process node. AMD targeting very high clock speeds for desktop Ryzen makes no sense, especially since AMD Rome is not going to be uber high clock speed either, and I would argue that on a brand new process node, aiming for lower clock speeds makes much more sense in terms of yields.
Yeah I agree and I'm not expecting a massive clock speed uplift TBH - it's generally not the most effective way of improving performance relative to power consumption nowadays and I'd imagine server performance is high on the priority list for Zen2. Desktop gaming is one of the increasingly few areas where it's sensible to lose so much efficiency for a bit of extra performance.
Also, anyone making any sort of decision or judgement based on clock speed alone for a new core is massively over-simplifying things and/or just wants a bigger number for e-peen, as usual, with no real understanding of what they're talking about, and can be safely ignored in any sensible discussion.
WRT the Zen2 core - what's everyone's predictions of the 'low-hanging fruit' AMD spoke about? I'm guessing Fabric speed/latency, AVX width, maybe changes to L3 from victim to inclusive with prefetchers if they think it's worthwhile? Obviously there are likely to be a load of other changes besides, but they seem like obvious targets, and while it was IMHO quite sensible to give AVX width a lower priority on the first generation core, the new node will buy them transistors to compete with Intel in that particular area. Overall the core seems like a very good, well-balanced one (from my understanding anyway) and has plenty of strengths vs competition as-is.
At a higher level, maybe they'll increase the number of cores per cluster?
IF would be a big target, especially if power requirements forced them to downclock it, and as AMD adds moar cores it makes sense to work on that. AVX throughput too, as that might be important, especially in some commercial situations - maybe also rejigging the caches as you suggested? They could increase the cores per CCX, and for the APUs they do need to get six or eight core parts out just to keep up with Intel, but if they improve IF they could simply add more CCX units instead.
From what I've been reading (not finished it all yet), it seems Intel miscalculated the wire resistance thing. There was limited talk about not being able to manage the resistivity of Cu interconnects under a certain size, but from reading the linked forum post on SemiWiki it seems the problems were overplayed, at least at the distances we're talking about inside a CPU; some research shows it doesn't become a problem until the 3nm mark.
Basically Intel decided to go with cobalt before they really needed to and it's caused them all sorts of problems. FYI: TSMC has stuck to Cu for all their interconnects (TSVs and wires), and I guess that's why they've not seen the sort of problems Intel has. Most of Intel's problems (AFAIK) come from Co having a very different thermal expansion coefficient from Cu; it seems to be OK for TSVs but not for wires.
Ah that's interesting! I'll try to have a proper read through it later.
CB updated/changed their CPU with 2080Ti review under the headline:
"CPUs von AMD & Intel im Test: Unsere Testergebnisse waren falsch"
That is, "our test results were wrong".
https://www.computerbase.de/2018-09/...e-rtx-2080-ti/
After the usual suspects in a certain American forum promoted the original version as some kind of Intel triumph.
It's a WIP.
The quote from the article is actually rather funny:
Quote:
The copy protection system Denuvo, which in many games limits activations to a maximum of 5 active systems in 24 hours, has prevented that for now, as further articles using the same games are being worked on in parallel.
So it seems this silly copy protection system hinders their benchmark efforts due to the 5-system limit. Guess CB are not big enough to buy multiple licences.
Of course, they've also hinted that they are working on their Intel 9000 series reviews.
Rather suspect that neither the 2600X results nor the Intel 9000 results will perform any differently than their clock speeds would dictate in most cases, as 6C/12T is enough for most games. The i9-9900K (at the hugely inflated £600 mark) will gain most of its most likely tiny lead over the i7-8700K purely from clock speed. Clock-for-clock it might actually lose a bit - there is a reason why previous Intel 8+ core chips didn't use a ring bus.
EDIT:
Actually, while I do expect the new Intel 9000s to be 'the best' gaming CPUs, I wonder whatever happened to all the GPCGMR and Intel enthusiasts who were urging everyone to get the 6700K or 7700K (or even worse the i5 equivalents) versus the Ryzen 1600 or 1700 just a short while ago?
At the time that was after a good few people had said how much their mins and consistency had gone up with their Ryzen 6C/12T or 8C/16T while playing multi-player BF versus their older i5/i7. Think that was around the same time that CB ran their 6/8/+ core gaming reviews just before Ryzen showed.
Also, min frames and consistency was of course the main benefit of Mantle and DX12 which was mostly ignored by the same crowd.
It does, but given that was also an AMD design decision I'm not sure it's a better reference point! But AMD have been doing this modular 4 core cluster through 3 architectures now (bulldozer was effectively a modular 4 (int) core architecture, as was Jaguar). The only real difference in Ryzen is how they're linked together.
At last, someone other than me has said this ;) Can't believe how many people have been talking about increasing the cores per CCX, which would be completely contrary to the whole point of modular design. The only thing that would make sense for AMD is increasing the number of CCXes, and the only real question in my mind is how many...
Take this with a grain of salt:
https://twitter.com/BitsAndChipsEng/...94745647165441
So Zen 2 has a 13% IPC increase in scientific tasks?
I've definitely thought of the possibility, but three clusters does seem like an awkward amount for layout on one die, so I'd be surprised if we saw that. Also, increasing the number of cores wouldn't really subtract from the modularity at all; you're just working with bigger building blocks, brought about in part by the smaller node.
Very impressive if true, and would put it somewhat ahead of Skylake in those workloads. Intel don't generally clock the server parts to the hilt either, so it could help make up for the gap.
AMD could likely use a decent leapfrog in FMA throughput though as that's one area Skylake still pulls ahead per-core, and the server Skylake core is wider still.
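To put rough numbers on that FMA gap, here's a back-of-envelope peak-throughput sketch. The widths are my assumption of the commonly quoted figures (Zen 1: two 128-bit FMA issues per cycle; client Skylake: two 256-bit FMA units; server Skylake-SP parts: two 512-bit FMA units), not something stated in the thread.

```python
# Rough per-core peak single-precision FLOPs/cycle from SIMD FMA width.
# Illustrative only; unit counts and widths are assumed, as noted above.
def peak_flops_per_cycle(fma_units, simd_bits, element_bits=32):
    lanes = simd_bits // element_bits
    return fma_units * lanes * 2  # each FMA counts as 2 FLOPs (mul + add)

cores = {
    "Zen 1 (2x128-bit FMA)":          (2, 128),
    "Skylake client (2x256-bit FMA)": (2, 256),
    "Skylake-SP (2x512-bit FMA)":     (2, 512),
}
for name, (units, bits) in cores.items():
    print(f"{name}: {peak_flops_per_cycle(units, bits)} SP FLOPs/cycle")
# -> 16, 32 and 64 respectively, which is why the wider server core pulls ahead
```

Actual sustained throughput is lower, of course (AVX clock offsets, memory bandwidth, port contention), but it shows why per-core FMA width matters in those commercial workloads.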
But as much as fanboys would protest otherwise, Intel don't seem to be on the same pedestal they once were, performance wise. As much as some like to do a load of hand-waving and claim the two are incomparable, Apple's ARM cores look to be doing extremely well now (and like mobile processors in general, have been improving significantly year-on-year for a while). Yes, there are differences to be considered, but take a look at the SPEC numbers:
https://www.anandtech.com/show/13392...icon-secrets/4
https://www.anandtech.com/show/12694...rver-reality/7
And don't forget that's comparing a ~5W mobile core to a full-on x86 core drawing many times that.
I don't think 4 cores per CCX is anything to do with silicon layout; I think it is to do with optimal traffic to and from the shared L3 cache. Two cores per CCX would mean more snoop traffic between modules, while 6 cores per CCX would increase contention on the L3. That balance may change if they drastically redesign the cache in some way, but chances are if there is a sweet spot it won't really move.
OTOH, if a new node gives you more transistors to play with you can just place a third CCX into the top level design, hook it into the fabric and tell the tools to layout the chip. That is kind of the whole point of the fabric to be able to do that, and automated layout tools aren't fussed by there being three of something.
To add to what DwU's already said, it's not that changing the module subtracts from the modularity per se, but it does somewhat defeat the object of designing a reusable module.
The only reason to change the basic CCX structure is if they can't get sufficient performance out of the fabric to link more modules together - but since they're already running up to 8 CCXes across the fabric and are happy with the overall performance, that doesn't seem to be a problem.
A lot depends on how the finances are looking, but I could easily see AMD moving to maybe 3 dies on 7nm - a 3 or 4 CCX CPU die, a 2 CCX + big IGP desktop APU, and a tiny 1 CCX and IGP mobile APU. Sticking with a 4 core CCX makes that much easier (and just imagine a 4C/8T APU going up against Intel's gutless dual-core Pentiums…)
@DanceswithUnix & scaryjim: What I mean is, I imagine laying out a die with three CCXs would be awkward (as far as I know anyway), and silicon layout tends to be fairly symmetrical for a few reasons. For three modules you have a choice between three in a line, maybe with fabric routing around the outside (but a fairly long distance between either end), or an L shape with some dead area in one corner. Maybe they could arrange the uncore into that part but, personally, I don't see it as a likely layout. I'm not professing to be right though; it just doesn't feel right to me.
WRT the sweet spot of cores per module, it's not necessarily set in stone, and could change as a result of e.g. a different node, core design or changes to the Fabric.