OK, I got the original mod list back, so I can do the Fallout 4 comparisons.
I was looking for an appropriate thread to post this, but it doesn't seem like anyone has posted about the new Intel lineup yet.
I see Intel have announced that some of the new lineup will include a soldered IHS (and are selling it as a feature lol). The bit that kinda surprises me is that it extends down to the six-core part, the Core i5-9600K, which makes me wonder if this is a salvaged 8-core die? I can't see the 6-core being functionally any different to its predecessors, so it would seem strange to change the production process for what is effectively a single mid-range part (at least, with fewer threads and lower clocks than its predecessor).
I have posted my Ryzen 5 impressions in this thread:
https://forums.hexus.net/pc-hardware...n-rebuild.html
Power Delivery Affecting Performance At 7nm:
https://www.reddit.com/r/Amd/comment...rmance_at_7nm/
This is why I think 7nm Ryzen won't clock as high as people think, and AMD will concentrate on other areas first.
Aside from AMD being very keen to win back some server market share, and despite their very public positive comments about TSMC 7nm, it is of course possible that all along they had planned the servers and low-power stuff for TSMC, but had expected GF's 7nm to be available for the high-frequency stuff.
Still, if 7nm doesn't buy them higher frequencies, what do they have to sell for desktop?
As the 65W 8C/16T part and the various EPYC server chips show, current Zen is already very power efficient. Even if 7nm brings that down to 40W or so, I can't see it making Ryzen more popular for desktop usage.
Every node in recent history has had tons of similar articles published covering some difficulty or other, whether that be leakage, electromigration, thermal density, resolution, defect density, you name it - I wouldn't take this as a particular stand-out cause for concern IMHO. Every node brings with it new challenges, which is precisely why design costs continue to rise.
It's another consideration and a part of a bigger picture, but it's a bit like worrying about performance of an upcoming sports car because a news company has discovered the engine has a lower redline, while missing out the fact that it also has a higher displacement, a turbocharger, and more cylinders...
Design challenges are part of the reason nodes didn't just skip from, say, 130nm straight down to 5nm even though everyone knew they wanted to get there eventually - challenges are encountered, lessons are learned, and solutions are discovered, making progress along the way.
WRT Zen2 - firstly remember it's not the same core as Zen so they won't be relying purely on clock speed for performance uplifts. AMD acknowledged some low-hanging fruit which they didn't manage to complete in time for Zen and I imagine most of this will be destined for Zen2.
Also, the extent of the power delivery issues isn't clear, and I doubt Zen was pressing up against any walls to begin with, given other factors seemed to be limiting its clock speed. It could just be that first-gen 7nm will struggle to increase power density, in which case relaxing density in critical areas of the core would be one possible workaround at the cost of some die space. Intel have done something similar with their 14++ process, which is less dense than the first generation and comfortably clocks higher, though for a number of possible reasons.
Again, it's a real stretch to become overly concerned about snippets of information like this taken in isolation.
The interesting bit was the comment about wire resistance. A couple of decades ago, although new nodes brought new problems, the worsening wire resistance was hugely offset by improved capacitance, to the extent that you got huge clock speed rises for the same power, which along with double the transistors for the same silicon cost made it worth chasing down the problems. So yes, there have always been comments that silicon was getting harder, but there was never any doubt that the gains would be worth it. Easy clock increases went away some time back, we don't get double the transistors any more on a new node, and rising design costs increase both the design risk and the quantity of chips you need to sell to make it worthwhile. I have to wonder if GloFo saying they have had enough of chasing smaller nodes is the start of a much more gradual approach to process improvement, rather than a forced chase of Moore's Law to try and make it self-fulfilling.
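To put that old trade-off in concrete (if made-up) terms, here's a minimal sketch with purely hypothetical per-node scaling factors - the point is just that as long as capacitance fell faster than wire resistance rose, the RC delay of the wiring still dropped each node:

r_scale = 1.3   # assumption: wire resistance worsens ~30% per shrink
c_scale = 0.6   # assumption: wire capacitance improves ~40% per shrink

delay = 1.0     # normalised RC wire delay at the starting node
for shrink in range(1, 4):
    delay *= r_scale * c_scale
    print(f"after {shrink} shrink(s): relative RC delay = {delay:.2f}")

With those made-up numbers the delay still falls to roughly half after three shrinks despite resistance getting worse every time - that's the "hugely offset" bit. Once capacitance stops improving that fast, the resistance problem is suddenly front and centre.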
A 386 had 275K transistors. On 1.5um that allowed 20MHz, at 1um you could get 33MHz, and AMD got their core to 40MHz at 0.8um. If modern parts scaled the same way, a 4GHz CPU at 16nm would become a 6.5GHz CPU at 10nm and 8GHz at 7nm; but sadly they don't.
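For what it's worth, here's that back-of-envelope in Python - the ratios are just the 386's historical clock steps applied to a hypothetical modern 4GHz part, purely to show where the 6.5GHz/8GHz figures come from:

# Back-of-envelope only: apply the 386's historical clock ratios
# (20 -> 33 -> 40 MHz) to a hypothetical 4 GHz part, as in the post above.
clocks_386 = [20, 33, 40]   # MHz at 1.5um, 1um, 0.8um
ratios = [b / a for a, b in zip(clocks_386, clocks_386[1:])]  # ~1.65, ~1.21

freq = 4.0                  # GHz at "16nm"
for node, ratio in zip(["10nm", "7nm"], ratios):
    freq *= ratio
    print(f"{node}: ~{freq:.1f} GHz if it scaled like the 386 did")
# prints ~6.6 GHz and ~8.0 GHz - but as noted, modern parts don't scale this way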
We haven't seen routine clock speed increases for quite a long time now so that's nothing particularly new, and we're already seeing cutting-edge nodes becoming more specialised and a less obvious choice outside of where performance is critical.
We've also gone from seeing GPUs being the pipe cleaners for new nodes to that role going to smaller, low-power mobile SoCs.
Then there are the problems Intel have had with 10nm, relaxing the pitches to get it out the door; 14nm suffered some setbacks too, to a lesser extent, and we didn't really see it on desktop in real volume until Skylake.
I still think all the people expecting 7nm Ryzen CPUs to be 5GHz parts are setting themselves up for a disappointment.
AMD is most likely going to increase the core count, increase IPC, improve AVX throughput and, more importantly, try to get the IF speed up. The only reason Intel is pushing clock speed is that they already have a tried and tested core and a very mature process node. AMD targeting very high clock speeds for desktop Ryzen makes no sense, especially since AMD Rome is not going to be uber high clock speed either, and I would argue that on a brand new process node, aiming for lower clock speeds makes much more sense in terms of yields.
Yeah I agree and I'm not expecting a massive clock speed uplift TBH - it's generally not the most effective way of improving performance relative to power consumption nowadays and I'd imagine server performance is high on the priority list for Zen2. Desktop gaming is one of the increasingly few areas where it's sensible to lose so much efficiency for a bit of extra performance.
Also, anyone making any sort of decision or judgement based on clock speed alone for a new core is massively over-simplifying things and/or just wants a bigger number for e-peen, as usual, with no real understanding of what they're talking about, and can be safely ignored in any sensible discussion.
WRT the Zen2 core - what are everyone's predictions for the 'low-hanging fruit' AMD spoke about? I'm guessing Fabric speed/latency, AVX width, maybe changing the L3 from victim to inclusive with prefetchers if they think it's worthwhile? Obviously there are likely to be a load of other changes besides, but those seem like obvious targets, and while it was IMHO quite sensible to give AVX width a lower priority in the first-generation core, the new node will buy them the transistors to compete with Intel in that particular area. Overall the core seems like a very good, well-balanced one (from my understanding anyway) and has plenty of strengths vs the competition as-is.
At a higher level, maybe they'll increase the number of cores per cluster?
IF would be a big target, especially if power requirements meant they needed to downclock it, and as AMD adds moar cores it makes sense to work on that. AVX throughput too, as that might be important, especially for some commercial situations - maybe also rejigging the caches, as you suggested? They could increase the cores per CCX, and for the APUs they do need to get six or eight core ones out just to keep up with Intel, but if they improve IF they could simply add more CCX units instead.
From what I've been reading (I haven't finished it all yet), it seems Intel miscalculated on the wire resistance thing. There was some talk about not being able to manage the resistivity of Cu interconnects under a certain size; however, from reading the linked forum post on SemiWiki, it seems the problems were overplayed, at least at the distances we're talking about inside a CPU - some research shows it doesn't become a problem until around the 3nm mark.
Basically, Intel decided to go with cobalt before they really needed to and it's caused them all sorts of problems. FYI: TSMC has stuck with Cu for all their interconnects (TSVs and wires), and I guess that's why they've not seen the sort of problems Intel has. Most of Intel's problems (AFAIK) come from Co having a very different thermal expansion coefficient from Cu - it seems to be OK for TSVs but not for wires.
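To make the wire resistance point a bit more concrete, here's a rough sketch using bulk resistivities only - at these dimensions surface/grain-boundary scattering and the barrier liner dominate (which is the actual case for cobalt), so treat the numbers as an order-of-magnitude illustration rather than anything rigorous:

# R = rho * L / (W * H), bulk resistivity only - ignores the size effects
# and liner thickness that matter most at these dimensions.
RHO_CU = 1.7e-8   # ohm*m, bulk copper
RHO_CO = 6.0e-8   # ohm*m, bulk cobalt (approximate)

def wire_resistance(rho, length, width, height):
    """Resistance of a rectangular wire, all dimensions in metres."""
    return rho * length / (width * height)

length = 10e-6                         # 10um of local interconnect (hypothetical)
for width in (40e-9, 20e-9, 10e-9):    # shrinking widths, square cross-section
    r_cu = wire_resistance(RHO_CU, length, width, width)
    r_co = wire_resistance(RHO_CO, length, width, width)
    print(f"{width*1e9:.0f}nm wire: Cu ~{r_cu:.0f} ohm, Co (bulk) ~{r_co:.0f} ohm")

On bulk numbers alone cobalt looks strictly worse; the case for it only appears once copper's effective resistivity and barrier/liner overhead blow up at very small dimensions, which fits the suggestion above that Intel moved to Co before they really needed to.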
Ah, that's interesting! I'll try to have a proper read through it later.
CB have updated/changed their CPU-with-RTX-2080-Ti review under the headline:
"CPUs von AMD & Intel im Test: Unsere Testergebnisse waren falsch"
That translates to "AMD & Intel CPUs tested: our test results were wrong".
https://www.computerbase.de/2018-09/...e-rtx-2080-ti/
After the usual suspects on a certain American forum promoted the original version as some kind of Intel triumph.