Agreed, it's a real shame - it was one of the few sites to do thorough, scientific articles and their own independent take on things. Granted, it was awkward having to read it through translation, but it was good enough that it was worth the effort.
https://www.nextplatform.com/2018/06...tacenter-ring/
Quote:
McCarron says that AMD’s share of the server space hit rock bottom at the beginning of 2017, with a mere 0.3 percent of shipments based on the prior Opterons. McCarron says that in the wake of the Epyc launch, AMD was able to garner around 1 percent share in the first quarter of 2018, and even though the second quarter still not done, he is forecasting that Epyc volumes are on a power of 2 ramp and will double every quarter for a while. AMD has been very clear that it hopes to exit 2018 with 5 percent shipment share and then build to a respectable double-digit share – that can be anywhere from 10 percent to 99 percent, of course – in the years after that. McCarron says that 5 percent by the end of 2018 is absolutely doable.
Quote:
“Our plan for the Naples-Rome-Milan roadmap was based on assumptions around Intel’s roadmap and our estimation of what would we do if we were Intel,” Norrod continues. “We thought deeply about what they are like, what they are not like, what their culture is and what their likely reactions are, and we planned against a very aggressive Intel roadmap, and I really Rome and Milan and what is after them against what we thought Intel could do. And then, we come to find out that they can’t do what we thought they might be able to. And so, we have an incredible opportunity. Rome was designed to compete favorably with “Ice Lake” Xeons, but it is not going to be competing against that chip. We are incredibly excited, and it is all coming together at one point. We have reintroduced ourselves to the market, gotten the initial traction and wins, we got the initial customer support, and we validated that AMD is a safe choice with an effective processor. With the Rome processor and process, we are going to be in an incredible position going forward.”
Interesting snippet picked up by WCCF:
https://wccftech.com/amd-7nm-vega-gp...oduction-tsmc/
Quote:
GF made the size of its 7-nm pitches and SRAM cells similar to those of TSMC to let designers like AMD use both foundries. AMD “will have more demand than we have capacity, so I have no issues with that,” he said of AMD using the Taiwan foundry.
It seems this has not been widely reported by the tech press:
https://forums.hexus.net/pc-hardware...nes-china.html
Wrong thread.
Some interesting discussion about some patents which might point to what we will see in future AMD GPUs:
https://www.reddit.com/r/hardware/co...to_the_future/
Ryzen 3 2300X leaked:
https://www.techpowerup.com/245676/f...-2300x-surface
The last paragraph of that article is horrid tosh, and shows a complete failure to understand product segmentation. The point of this CPU over the 2200G is the 500MHz higher boost clock. That's a big enough plus to make it a better choice to pair with a dGPU. As to enabling SMT on them … what do they think the quad-core Ryzen 5s are?! There's already an SMT-enabled APU, and I strongly suspect the 2300X will be accompanied by an SMT-enabled Ryzen 5 2500X. In other words, the parts they think AMD should make already exist.
Seriously, what has happened to journalism in the tech world? I've read so many articles recently that … well, if they were in print I'd've ripped up the page, I think!
An interesting thing I was reading about recently is that the IGP on the 2200G and 2400G has a clock wall around 1200-1300MHz, and you would actually do much better to start the overclock off at 1400MHz.
I've been wondering that for a long time.
The incentives around selective sampling and NDAs do corrupt the industry though - if you don't play ball with some companies, point out the good stuff and don't mention the bad stuff, you lose your free review samples and your page hits with it. Companies love 'journalists' who are basically just doing free marketing for them, so sadly you often have to go looking for the more interesting stuff.
Edit: That last bit is pretty irrelevant in this context but I wanted to get it off my chest! :P
The thing is, I don't know of many computing magazines with the same sort of coverage as the tech websites. I used to subscribe to PC Pro and Custom PC years ago though - maybe I'll pick them up and see what they're like now!
Most computing review sites can't even test simple things like games properly. I still remember when Nvidia introduced its non-deterministic boosting system in Kepler - I could foresee it giving issues, since I knew how reviewers tested games: in open-air test rigs in air-conditioned offices, with short sequences. Lo and behold, most review sites took weeks after the initial reviews to cotton on to how it was causing issues, and had to change how they tested things. TBH, it was not even the fault of Nvidia, as they had indicated how it worked. The reason why AMD caught up was probably not down to drivers IMHO, but more down to how the testing changed.
Then there's CPU testing in games, done in areas which are not even CPU intensive, since they haven't bothered to actually research the games they are testing. It's why so much testing of FO4 was utterly pointless.
Also remember the Handbrake thread?? That even came out of WTF results from one review site.
Or of course, the classic 'turn resolution down to 800x600 because it's representative of future performance LOL'. Which, predictably, turned out to be utter nonsense.
Yep - 1080p I could understand, but then they make games which are not really CPU limited look CPU limited, and games which are CPU limited they don't test properly, so the tests mean nothing.
Even to this day very few test sites bother testing more than one area in a game, let alone showing the test sequence, so you don't know what they are testing!!
It was like the DX12 and Vulkan testing that was done, which was an utter fail in many cases. Let's test it with the fastest CPU possible, when you should be testing with weaker CPUs, which is where DX12/Vulkan helps.
So WoW got a DX12 update which does not work on Nvidia graphics cards, but Nvidia in DX11 is still faster, according to Computerbase.de's testing:
https://www.computerbase.de/2018-07/...x-12-1920-1080
However, it would be nice if they used proper scales.
https://i.imgur.com/2306qux.png
https://i.imgur.com/cminm4F.png
Even though the RX580 is registering fewer FPS, look at the frametime plots - the GTX1060's DX11 graph, if magnified, seems to show it see-sawing, whereas the RX580's seems somewhat flatter. However, Computerbase.de have used different scales for the two graphs even though they are the same physical size: 70 vs 30 (a factor of 2.33) on the y-axis, and 1.813 vs 1.432 (a factor of 1.27) on the x-axis.
If you equate the scales:
https://i.imgur.com/TBUdnCX.png
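To put some rough numbers on why the unequal y-axis scales mislead (my own quick illustration, not Computerbase's method - the 30ms/70ms ranges are just the figures quoted above):

```python
# With two frametime graphs drawn at the same physical size, the on-screen
# height of a given frametime swing depends on the y-axis range each plot uses.

def on_screen_height(swing_ms, y_range_ms, plot_height_px=200):
    """Pixels a frametime swing of `swing_ms` occupies on a plot
    whose y-axis spans `y_range_ms` over `plot_height_px` pixels."""
    return swing_ms / y_range_ms * plot_height_px

# The same 2ms of jitter, plotted on a 30ms-range axis vs a 70ms-range axis:
px_narrow = on_screen_height(2.0, 30.0)
px_wide = on_screen_height(2.0, 70.0)

# The narrow-range plot makes identical jitter look ~2.33x taller.
print(px_narrow / px_wide)
```

So unless the axes are equated, the card plotted on the tighter scale will always look "spikier" for the same actual variance.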
Also, something else even more intriguing. It seems WoW looks very CPU limited on the GTX1080 under DX11 (the Core i7 8700K is 19% faster than a G4560), whereas the Vega64 under DX12 looks mostly GPU limited (the Core i7 8700K is 5% faster than a G4560).
I would love to see some additional testing during raids which is where the game is mostly CPU limited.
Some of the comments about the RX580 seem weird:
https://www.eurogamer.net/articles/d...omment-7273018
They say the RX580 "needs" a higher-end CPU to perform, but look on YT etc. and there are tons of videos showing otherwise. And then saying the RX580 gets driver optimisations later?? Look at Destiny 2 for example - launch-day performance on AMD cards was fine. Something like WoW?? It seems a rather vague statement TBH!
You can get some oddities, like the minimum frame rates here on a Vega 64 coupled with a 2200G: https://www.bit-tech.net/reviews/amd...00g-reviews/6/
and whilst you wouldn't build a setup that unbalanced from scratch, until I update the CPU in my system that is sort of what mine has evolved into. Average frame rates in VR are clearly higher than with my old R9 380, yet somehow Elite Dangerous feels worse (though I hadn't played for a while so that might be something in the 3.0 or later update).
There are plenty of videos where an RX580 with a G4560 or Core i3 8100 is still faster than a GTX1060.
It's the same with FO4 - I am CPU limited, but drops are more noticeable at times since the average is now higher. I have had a GTX960, RX470 and GTX1080 - the minimums are the same.
So if I am at a constant 60FPS and I suddenly drop to 30FPS, it's more noticeable than, say, 40FPS to 30FPS.
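In frametime terms the arithmetic backs that up - the per-frame time jump from 60FPS down to 30FPS is twice as big as from 40FPS down to 30FPS (a quick sketch, just illustrating the numbers in the post above):

```python
# A drop to the same 30FPS floor hurts more from 60FPS than from 40FPS,
# because the per-frame time increases by a much larger step.

def frametime_ms(fps):
    """Per-frame time in milliseconds at a given frame rate."""
    return 1000.0 / fps

drop_from_60 = frametime_ms(30) - frametime_ms(60)  # extra ms per frame
drop_from_40 = frametime_ms(30) - frametime_ms(40)  # extra ms per frame

print(round(drop_from_60, 1), round(drop_from_40, 1))
```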
I'm seeing a lot of articles claiming Intel is abandoning/giving up on Phi. I'm wondering if that's perhaps a little unfair? They're discontinuing a fair chunk of their products without a replacement which is admittedly a bold move, but they're still producing some variants?
I also wonder if their GPU development renders some of the lineup redundant, though that's likely still a while off yet so I doubt that's a significant reason. 10nm still being unsuitable for mass production, particularly for huge monolithic dies like Phi probably has a lot to do with it, but that hasn't stopped them from doing refresh after refresh on the CPU side?
Probably a good idea to take this with a few truckloads of salt, but if true it would be something.
Quote:
AMD's Zen 2, 7 nm Chips to Feature 10-15% IPC Uplift, Revised 8-core per CCX Design
BTW, I am going to ignore Buildzoid from now on, as even though he is knowledgeable, he is pushing way too much hyperbole about Ryzen motherboards and it's causing more issues than it's solving. The guy is literally making people on forums like OcUK and Reddit think they need £150 motherboards to run a Ryzen 5, even at stock or with a mild overclock, or the VRMs will go kaput - when these chips are hardly an FX8350, FFS.
This is despite plenty of sub-£1400 prebuilt systems from OcUK, HP, Acer, etc. running Ryzen 5 and even some Ryzen 7 systems with motherboards like this:
Asus Prime B350-Plus(4+2 phase VRMs with cooling)
Asus Prime B350M-A(4+2 phase VRMs with no cooling)
or this:
https://i.imgur.com/cBsSKTY.jpg
https://support.hp.com/us-en/document/c05634309
https://support.hp.com/us-en/product...ment/c05521044
He is literally making Ryzen look more expensive than it needs to be, and at the same time hardly anyone is putting effort into looking at lower-end Z370 and B360 motherboards, which probably suffer from the same "issues".
It wouldn't surprise me if a lot of lurkers look at all the paranoia around the B450 motherboards etc. and then don't bother with Ryzen, i.e. they "need" a £150+ motherboard, expensive RAM, etc., since they hear nothing bad about a £70 B360 motherboard and Core i5 8400 combo which will cost much less.
What's Buildzoid? Google shows results for GamersNexus - if they're the same thing, I've never paid much attention to them anyway. I found they often get weird results compared to other places, complain about everything even when it's a storm in a teacup (as seems to be the case here), and are yet another site to induce facepalms when they get technical details wrong.
He is a YouTuber who analyses VRMs on motherboards etc., so he is a useful source of information. But at the same time, it's like saying a £100 500W PSU which gets 9.8 on JG is better than a £45 500W one which "only" gets 8.9, and that you must buy the £100 one since it scores higher. Sure, the £100 PSU is better when pushed, but it's not like the £45 one will necessarily be useless for a lot of builds.
I seriously get the impression too many tech reviewers think everyone who builds a PC wants to run an overclocked Core i7 8700K or a Ryzen 7 2700X. Just because a cheaper motherboard cannot take an overvolted 105W TDP Ryzen 7 doesn't mean it's not suitable for someone running a Ryzen 5. After all, how many motherboards couldn't take an FX9370 or FX9590 but would work fine with an FX6300 or FX6350 for years??
Another good quarter for AMD:
https://www.marketwatch.com/story/am...eat-2018-07-25
TBH I find that attitude is quite common on the net - some people refuse to accept that anything less than the top-end parts has its place. Plenty will try to convince you that anything less than a fancy brand of tool is useless and not worth buying, which is plain money-wasting if you're only going to use it once or occasionally. Just throwing money at a problem, particularly when it's totally unnecessary, really isn't as smart as people like to pretend.
The thing is, if you are going to criticise the VRM designs of a whole tier of boards, at least say what they will run and what they won't. Someone buying an £80 2200G is not going to care whether their £70 board can run a 2700X overclocked. They probably want to know if the 2200G runs fine, and maybe something like a Ryzen 5.
A lot of enthusiasts, especially AMD fans on Reddit and various forums, need to understand the difference between being an enthusiast and being pragmatic. They might be willing to spend £150 on a motherboard, but most gamers and DIY builders are lurkers and will assume that means Intel is cheaper, since the Intel boards are not getting the same scrutiny and by extension are assumed fine, even though the cheaper Intel motherboards are also reduced-spec in many ways.
So a £160 Ryzen 5 might look good value, but not if you "need" a £150 motherboard and a £250 RAM set, while the £150 Intel Core i5 is "fine" with a £90 motherboard and a £160 RAM set. AMD fans will throw money at perfecting the setup, but Intel is more trusted anyway (hence why Spectre/Meltdown is not really affecting consumer sales of their chips), and if it looks cheaper and less finicky they will still get more sales - especially since £100 to £200 is quite a lot of money, the difference between a GTX1050 and a GTX1060, or the latter and a GTX1070.
It's bad enough that Ryzen is pickier about RAM than Intel; now they are adding worries about the motherboards too, especially when Ryzen is far more efficient than the FX series. There was even a £40 780G motherboard which could run an FX8320 fine, FFS - someone tested it extensively on a forum once and it was fine. So now much more expensive motherboards have problems, with more efficient CPUs?? Really??
In the end, AMD fans then wonder why Intel might still be selling more CPUs, despite Ryzen having "better value" and a "longer-term" platform.
It's basically an own goal.
I don't think a lot of current 'enthusiasts' really understand pragmatism! Complaining about cost on one hand, then spending several hundred pounds on a huge case, RGB lighting, an excessively large PSU, a way over-specced motherboard, a £150 AIO CPU cooler... then running a stock CPU...
The current fad of AIO coolers says a lot about the current market - a solution which found a problem in Intel's awful IHS thermal path. I still facepalm over it, coming from the days when you used a £20 tower cooler if the stock cooler was too noisy for you or you wanted to overclock. I still do exactly that. The market is full of 'showing off' by throwing money and fairy lights (almost literally) at things as a way to differentiate. Personally I find it far more interesting to see what can be done, e.g. on a limited budget, in a small case, or with more unconventional or DIY builds. 'Enthusiast' means different things to different people: some are 'enthusiastic' about spending money and showing off, others are interested in hardware, how it works, etc. I'm firmly in the latter camp, and I'm just as interested in e.g. low-power hardware as top-end stuff. Sometimes more so, for the amount of engineering that goes into some limited-power-budget systems.
The 'art' of the electronic and hardware design itself is far more interesting to me than a bunch of fairy lights and garish designs.
Interesting information about Rome: https://www.anandtech.com/show/13122...fabbed-by-tsmc
Edit: When they were talking about insufficient capacity at GloFo I expected the GPUs would be the thing to move back to TSMC! It's definitely an interesting move - perhaps TSMC's 7nm process is favourable for CPU performance, and AMD are able to fill their GloFo quota with GPUs?
I also wonder if this means the server and desktop dies will be separated by the fab they're produced at? If anything I would have expected the higher-clocked desktop parts to see TSMC production, but then again I wonder if it's worth the effort of having two separate runs given HEDT volumes?
TR 2990X listed:
https://videocardz.com/newz/amd-ryze...nadian-dollars
It could be because TSMC is able to push out 7nm chips quicker and in more volume than GF? I would imagine 7nm Vega is a much lower-volume product than a 7nm CPU.
Yeah, perhaps it's a timing thing. I did always view 7nm Vega as a sort of pipe-cleaner part, not that we've had stuff like that in a fair while, since mobile SoCs seem to fill that role now.
I wonder which process will clock higher though? I would imagine clock speed is less of an issue with TR Mk3 than Ryzen Mk3.
If the basic Ryzen module has been created with both the TSMC and GF processes in mind, I have a strong feeling Ryzen 3 won't be hitting 5GHz on the first go, as something has got to give to make it work with both companies' manufacturing without too much mucking around. I could be wrong OFC.
Apparently GF 7nm is meant to be similar to TSMC 7nm in basic characteristics though:
https://www.anandtech.com/show/12831...-skipping-5-nm
Threadripper 2990WX CPU-Z specs leak - the takeaway is that it's a 4.1GHz (boosting) part with a 250W TDP.
If Ryzen 3 7nm can hit 4.5GHz at 8 or 12 cores I'll buy one. Otherwise, I may well skip it :)
I wonder what the logistics are like of using multiple companies for a process for similar designs. Must be HELL!
I would imagine it's not too bad, as DRCs have become so complex that it's probably like sending a CAD design to a manufacturer: theoretically you should get the same product no matter who's making it. That's not to say it's as good as doing it all in-house, just that you know how it should be made, and from there you may need to tweak things a little, based on who's fabricating it, to get to your end product.
It will be interesting to see how the 32-core TR2 parts pan out - two of the dies will have disabled memory controllers AFAIK. Having said that, going from a 180W TDP with the 16-core 1950X to a 250W TDP with the 32-core 2990WX is good going. It will be interesting to see what the power consumption is like.
BTW, it seems rather weird recently that AMD motherboards are getting far more scrutiny than the Intel ones when it comes to VRMs, as Ryzen is hardly BD. It does make me wonder, especially after SKL-X, and the fact Intel is now releasing 8C CPUs, which will push VRMs even harder than previous generations of SKL-uarch-derived CPUs.
The issue is, some of the stuff spread before the B450 launch was fear-mongering about "split 4 phase designs", which was rather hilarious since such designs were used back in the Bulldozer and Phenom II days, so it doesn't mean all doom and gloom. I remember telling people years ago that an "8 phase" design is a doubled-up one, which has its pros and cons.
In fact, looking at the Techspot/HU analysis of the boards, it seems not much has changed, but the biggest thing to look for is VRM cooling.
A number of boards have poorly thought-out VRM heatsinks and huge plastic shrouds which cause them to get too hot.
Plus, people also need to consider that reference VRM designs are made for top-down coolers, as those blow air over the VRMs. So if you use a vertically aligned tower cooler or an AIO water cooler, you need good case airflow so the heat gets removed from the VRMs.
Some of the Intel CFL boards are also not up to the task, like the B360 board used in this MSI PC:
https://hexus.net/tech/reviews/syste...-a-8th/?page=3
https://www.msi.com/asset/resize/ima...8a9ba1/600.png
Those VRMs are probably not getting much airflow.
Look at the Core i7 8700 being held back - put that Core i7 8700 in a better board and it will perform better. I just feel you need to scrutinise both platforms equally, otherwise someone might think you can use an Intel CPU in a cheapo board and it will be fine, and the opposite for AMD, i.e. a false sense of security.
I don't disagree with anything you say, I just don't think people consider the implications of focussing on one brand or the other right now, they're just following the hype/clicks.
VRM cooling is my biggest gripe with all boards at the moment. Really isn't hard to stick a finned piece of metal on rather than a designer heat trap.
Yeah, it annoys me when they do that, and some of these so-called VRM coolers and their shrouds actually look more like an impediment to installing CPU coolers etc. in the first place too.
Edit!!
It seems the Gigabyte Aorus B450 has issues with a Ryzen 7 under heavy load due to the stupid VRM cooler design, and interestingly they are giving away an SSD worth £50 with it:
https://www.hotukdeals.com/deals/gig...vatech-3001699
I suspect they know why! ;)
Still not a bad deal if you are going for one of the lesser chips, I suppose, but it makes me wonder how much these companies test things.
The form-over-function trend continues, sadly. Many companies seem to be focusing more on stylish heatsinks and fairy lights rather than the useful stuff like a decent BIOS, quality VRMs, etc.
Depends on whether you think there's been no progress made in VRM components over the last 10+ years. The range of TDPs on the two sockets is basically the same, and personally I'd kind of expect at least the efficiency of VRMs to improve (meaning less waste heat) and perhaps also that they'd be able to cope with more heat over a longer period of time.
With the same power running through the components, but better efficiency and longevity, you should need less cooling....
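As a back-of-envelope sketch of that point (the efficiency figures here are purely illustrative assumptions, not measured numbers for any real board):

```python
# The heat a VRM has to shed is its conversion loss, so a few points of
# efficiency make a real difference to how much cooling the phases need.

def vrm_waste_heat_w(cpu_power_w, efficiency):
    """Watts dissipated in the VRM while delivering `cpu_power_w`
    to the CPU at a given conversion efficiency (0-1)."""
    return cpu_power_w * (1.0 / efficiency - 1.0)

# Hypothetical older vs newer component efficiency, same 95W CPU load:
old = vrm_waste_heat_w(95.0, 0.85)
new = vrm_waste_heat_w(95.0, 0.93)

print(round(old, 1), round(new, 1))
```

Roughly halving the waste heat for the same delivered power is exactly the "less cooling needed" argument.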
How easy is it to remove one of those plastic shrouds?
Less cooling I'm fine with. Intricately machined and engraved blocks of metal that don't expose surface area to airflow, not so much.
There are win-win situations available to the manufacturers - they can design cooling that's both effective and looks good if they want.
Most of these heatsinks look like they got some middle-aged boy racer from the 80s or 90s, the sort who used to stick random bits of fibreglass onto their car, to design them.
Edit!!
They probably have as much functionality as the parts they stuck onto their cars!;)
Some of them are integrated into the backplate IIRC.
DwU will like this:
https://www.youtube.com/watch?v=iVUj...ature=youtu.be
Look at the FX8350 in a number of those games when compared to a 2200G.
lol, it has been superb value for the £125 it cost and the years I've had it.
I'm just glad the thing runs games at all atm though. I was hoping to upgrade this autumn, but now have an eye-watering car bill coming in (think top-end Threadripper, and don't spare the high-speed DDR4 sticks). Sometimes I think I should get cheaper hobbies; sadly my kidneys aren't in that saleable a condition :)
Quite telling that the CPU usage on the FX8350 was only ever 50-60% whilst the 2200G was at 90%. Wasted potential?
My 1055T was a great CPU, handled games better than a lot of my friends would even believe! And as is often the case, it arguably outlasted its competitor CPUs in terms of modern gaming performance.
GamersNexus has a bit more than the standard specs and price of Threadripper 2 - in the video Steve covers the topology, die arrangement, and a bit more.
https://www.youtube.com/watch?v=D8CRg-eWRn0
I would love an upgrade too - my old IB Xeon E3 is starting to struggle under DxO, and OFC after the security updates FO4 also dropped in performance. Also, I have a mini-ITX system, so I don't care about the higher TDP chips as much. RAM pricing OTOH is just stupid, but you also need to not leave it too long, as exchange rates are another consideration - unlike with consumer electronics, computing retailers tend to pass on any increases quite quickly, going by what happened last time.
I will see what Black Friday brings to the table. Either a Core i7 8700 or Ryzen 5 2600 methinks. Use potato RAM.
There are some rumours of 7nm Epyc having 4 SMT threads per core instead of 2, and CD from SemiAccurate had a very interesting article up - I wonder if this is down to Ryzen 3 having more threads per core:
https://semiaccurate.com/2018/08/07/...-they-know-it/
2019 is going to be interesting! Let's see how much of this is actually true and not clickbait! ;)
https://www.hardwareluxx.de/index.ph...t-rekorde.html
The 32-core 2990WX hit 5.3GHz on LN2:
https://cdn01.hardwareluxx.de/razuna...8419C452AD.jpg
Looking at the AM4 mini-ITX boards, they are an utter rip-off TBH, especially as I don't really overclock anyway. It's a sad day when an overpriced Intel B-series board seems better specced than a "midrange" B450 one.
That's another thing - mini-ITX can be much smaller, but idiot companies build stupidly large cases which make no sense. I also still have a mini-ITX case.
I was looking at motherboards, and the B450 ones are worse specced than B360 ones and cost more! I don't even overclock any more, so it makes no sense why they cost so much, especially since AMD chipsets are less complex than the proper ones Intel has to implement.
If they are after the gaming market and allow for air around a 2-slot graphics card that's got some length as well as girth to it, plus a full ATX PSU to power the sucker, then you have already used quite a few litres of space. Most of the uATX motherboard goes *under* that GPU, so the increase in case size can be minimal. Although some of the uATX cases are also laughably massive.
I was shocked my Vega 56 graphics card didn't fit in my Antec 300 full ATX case until I took a drill to it and removed all the 3.5" drive bays, I guess I'm out of the shoebox market until my next GPU upgrade :)
Most of the cases are poorly designed - the ones designed on HardOCP, for example, are small and have no issues with a card like the GTX1080.
Plus, with SFX PSUs available, the lazy mainstream case designers look even more laughable.
My main PCs have been SFF since 2005 including Shuttles. Not had an issue with smaller cases so far! :p
I still remember having a massively overclocked, high-VID Q6600 in a Shuttle with an overclocked HD5850, a few HDDs, a card reader and an optical drive. It lasted nearly 5 years!
With modern components being more efficient and with smaller drives it makes me wonder why companies are being out thought by people on a tech forum.
Edit!!
Even with mATX cases companies are lazy - they are way too big, to the extent that you can get small ATX cases like the CR1080 of a similar size, which is one of the few reasonable mainstream efforts and isn't even made by a mainstream company. The smallest ones again seem to be done by internet collaborations. I also want something light, and a number of mainstream cases are either way too heavy or use cheap materials.
On a side note - this year Nvidia is making around 65% gross margins and nearly 40% net margins, which is 2 to 3 times more than during the Fermi days and the start of the Kepler generation. People argued with me on forums saying new nodes cost more, which meant prices would rise, and that the Titan cards were not the normal upper tier of Nvidia's large-GPU cards rebranded into a luxury tier. Yeah, see how that worked out.
The two are not exclusive though. If the cost of making a big die went up 10%, and Nvidia managed to squeeze people for 100%, they keep the difference. Foundries used to talk about cost per square mm across processes, now they talk about cost per transistor because that doesn't look as bad. But for things like graphics the whole point of moving to a new process is to get more transistors on there so the die size has to stay up.
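The cost-per-mm² vs cost-per-transistor point works out like this with toy numbers (purely assumed figures for illustration, not real foundry pricing):

```python
# Cost per mm^2 can rise on a new node while cost per transistor still
# falls, provided density improves by more than the wafer cost increase.

def cost_per_transistor(cost_per_mm2, transistors_per_mm2):
    return cost_per_mm2 / transistors_per_mm2

# Hypothetical: new node costs 30% more per mm^2 but doubles density.
old_node = cost_per_transistor(cost_per_mm2=10.0, transistors_per_mm2=8e6)
new_node = cost_per_transistor(cost_per_mm2=13.0, transistors_per_mm2=16e6)

# Per-transistor cost still drops, even though each mm^2 costs more.
print(new_node < old_node)
```

Which is why quoting whichever metric looks worse makes it easy to justify a price rise that the underlying costs don't actually demand.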
I think their cheek in releasing a £1000 card and then later saying "but you can have this 'cheap' ti version for 'only' £700" is a stroke of evil genius which I am amazed they got away with.
People were trying to be clever - like I said at the time, lots of other stuff is also made using those fabs, and consumers should worry about themselves, not about justifying price increases.
This is why the Titan class worked - people who looked at the die sizes realised a Titan used a GTX580-class GPU but was rebranded as a luxury class, and that the 80/80Ti card you got was really the equivalent of a GTX570-class GPU. But even now people argue that the Titan was a new class of GPU, etc., and nothing has changed on tech forums - then you get even Hexus repeating rubbish like this, ignoring the Titan series:
https://hexus.net/tech/news/graphics...pus-tabulated/
It's hilarious, especially as you watch Nvidia's net margin increase more and more.
Doubling or even tripling of net margins means costs have not increased anywhere near as much as people thought for Nvidia. And remember all the people referring to that conveniently leaked Nvidia slide which talked about 28nm costs - in hindsight it seems a little too convenient that it was leaked the moment they decided to change tiers.
It's probably why Intel wants to make PC graphics cards - they want high margins too, so what worries me is that rather than prodding Nvidia, they would join them. Even AMD wants to join them, but the graphics side of AMD is so incompetent that they can't even make decent money when Nvidia has basically made $1200 cards seem acceptable.
The thing is, I wonder how long the PC gaming market can bear this over the next 5 to 10 years. Either things start slowing down in the mainstream as people have to wait longer and longer for performance to double, or more people move to consoles and keep their PCs longer and longer.
If you look at the small-die 04/14-series cards, the RRP has gone up from $250 for the GTX560Ti to $500 with the GTX680 and between $599 and $699 for the GTX1080.
I wonder how long before Nvidia/AMD/Intel start GPU rental plans?? ;)
It looks like you can buy the 35W TDP GE series APUs:
https://www.quietpc.com/amd-2nd-gen-ryzen-cpus
Just more evidence that the 14nm LPP process was mobile/server oriented, really. The first-gen Ryzen desktop chips and discrete Vega cards were clocked to the very edge of the process's capability; these pretty much hit the sweet spot.
EDIT: also a good demonstration of the ridiculous performance/watt you can get out of power-optimised Vega, of course ;)
Having had a think about 7nm, I just hope AMD making the Rome CPUs at TSMC does not indicate a problem with the GF process for the desktop CPUs. The noise is that GF does not have capacity, but I do hope it isn't another case of GF saying one thing and doing another, and AMD hedging their bets at TSMC instead.
I guess it's pretty clear the arch was built for consoles and mobile, but was just about passable for gaming & HEDT.
You look in the AMD enthusiast forums/sections of Reddit and everyone seems to preach that undervolting is the best way to get the most out of Vega. I'm sure CTF has highlighted this a few times also.
We know that Vega is not tied to HBM, so it's a bit of a surprise we haven't seen any lower-end Vega cards IMO. You would think it might be a bit more competitive with what Nvidia has to offer in the mid and lower-mid spaces.
It's probably down to resources - AMD has prioritised CPUs over GPUs according to their own statements regarding Zen, so I suspect big Vega was developed for non-gaming markets first, and they just put a new badge on it to show they had something for gamers. After all, the first big-Vega products announced were the Vega-based Radeon Instinct cards.
Yeah, probably right. But tactically, if AMD release a 7nm mid-range Vega-type GPU with GDDR6 soon after Nvidia roll out all these untouchable high-end GPUs (where it seems we might see bizarro levels of segmentation), they could at least retain what they have in the RX 580 segment of the market and force Nvidia to be a bit more aggressive.
I had an interesting thought regarding all this raytracing stuff. Navi was rumoured to be an MCM-type setup with multiple identical chips trying to appear as one GPU to software. Apparently this is hard to do, and AMD said Navi was a normal GPU. However, what if Navi or a future AMD GPU line are normal monolithic GPUs, but use IF to integrate a co-processor for things like ray tracing?? Hence two smaller chips on an MCM.
It probably won't happen, but I thought it would be an alternate way forward instead of large chips.
I'm pretty sure that at some point MCMs will become the way forward for GPUs, but AFAIK at the minute the extra latency of taking trips off-die is just too high to make it effective. OTOH having on-die IF linking modular logic blocks might make it easier for AMD to provide different levels of ray-tracing capability on different GPUs, if they can come up with a repeatable block that can be implemented multiple times...
I think, just as Jim said, this is inevitable.
I think a lot of the progress with MCM will happen on CPU before it makes its way to GPU. Threadripper is really just the start in a long path of advancements.
Suppose we could see a parallel compute card that really isn't for gaming at all, but uses GPU-based tech with MCM for better yields.
For data centre/AI/science that could be pretty useful.
Have people seen this:
https://sconedocs.github.io/microcode/
https://labs.vmware.com/flings/vmwar...river/comments
That is the microcode update for Intel CPUs under Linux for the Foreshadow vulnerability which points to this:
https://downloadmirror.intel.com/280...e-20180807.tgz
The latest microcode updates are listed here, so it's a valid link:
https://downloadcenter.intel.com/dow...-Data-File?v=t
Now if you download it and look at the license, it says this:
https://paste.ubuntu.com/p/z2F3Cj6R8Q/
Wait, wut?? People can't benchmark the effect of the microcode update??
Quote:
3. LICENSE RESTRICTIONS. All right, title and interest in and to the Software
and associated documentation are and will remain the exclusive property of
Intel and its licensors or suppliers. Unless expressly permitted under the
Agreement, You will not, and will not allow any third party to (i) use, copy,
distribute, sell or offer to sell the Software or associated documentation;
(ii) modify, adapt, enhance, disassemble, decompile, reverse engineer, change
or create derivative works from the Software except and only to the extent as
specifically required by mandatory applicable laws or any applicable third
party license terms accompanying the Software; (iii) use or make the Software
available for the use or benefit of third parties; or (iv) use the Software on
Your products other than those that include the Intel hardware product(s),
platform(s), or software identified in the Software; or (v) publish or provide
any Software benchmark or comparison test results. You acknowledge that an
essential basis of the bargain in this Agreement is that Intel grants You no
licenses or other rights including, but not limited to, patent, copyright,
trade secret, trademark, trade name, service mark or other intellectual
property licenses or rights with respect to the Software and associated
documentation, by implication, estoppel or otherwise, except for the licenses
expressly granted above. You acknowledge there are significant uses of the
Software in its original, unmodified and uncombined form. You may not remove
any copyright notices from the Software.
That seems an incredible abuse of power and display of corporate arrogance.
Actually, I think this deserves its own thread in as many forums as possible.
EDIT:
Doesn't this mean that logically Intel CPUs can no longer be benchmarked or reviewed at all from now on?
Because otherwise the details of any tests would read something like this:
Quote:
All benches were run with the latest patches, drivers, and updates except for the Intel CPUs where the performance impact of the latest security patches is top secret and we may not disclose their performance impact.
The comparison clause is usually included in beta software, I'm surprised if it's present in any release version. That said, I also haven't seen any such terms and conditions when applying updates to computers.
Read the other parts of the clause, which say the US government is not allowed to benchmark either. Then there's the fact that to read the clause you need to download the software, which means you are bound by it.
Well other parts of the clause have annoyed Debian:
https://www.theregister.co.uk/2018/0...patch_licence/
Also, I won't purport to be a Linux expert, so do you want me to start a new thread here or do you want to??
Also a rumour AMD might launch APUs on 7nm by the end of the year:
https://videocardz.com/77660/exprevi...d-of-this-year
Also I have some concerns about 7nm too (think about AMD and the fact it is dual-sourcing) - I do think it is quite possible AMD might be able to beat Intel to a proper rollout of parts on 7nm/10nm, but clockspeeds might be the issue, at least initially.
On mobile, clockspeeds are a lesser issue, and also Intel has technically "beaten" AMD on core count there, as they have 6C and soon-to-be 8C APUs. Moving to 7nm allows AMD to add more cores whilst keeping chip size and power consumption in check. Also, the APUs can act as a 7nm CPU test.
They could run out a 2x CCX APU and a 4x CCX main die, which could then be scaled up to a 64 core 4 die MCM.
I'd personally find it really odd if the 4 core CCX only lasted one (and a half?) generation before they redefined the CCX again - particularly when the whole point of the CCX is that it's modular. Perhaps they'll decide that 4 cores is actually enough for an APU. After all, the current APUs aren't priced or specced to compete with i7s, or even i5s really. We might see a 1 CCX APU with more shaders, and maybe a 3 CCX CPU for up to 12 cores. 4C/8T APUs are still going to compare favourably to the i3s anyway (which are currently, and likely to remain, 4C/4T).
And then again maybe they'll just concentrate on using the transistor budget to improve clockspeeds, like they did with Vega 64. If they can pull a similar use of transistor count to bump the clock speeds of Zen 2 they won't need to increase core counts, because they won't be compromising on single-threaded performance anymore....
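As a back-of-envelope check on the configurations being floated above, here's a sketch assuming the 4-core CCX stays as the building block (none of these part configurations are confirmed products):

```python
# Back-of-envelope: how core counts scale if AMD keeps the 4-core CCX
# as the building block (hypothetical configs, not confirmed products).
CORES_PER_CCX = 4

def total_cores(ccx_per_die: int, dies: int = 1) -> int:
    """Total cores for a given CCX-per-die count and die count."""
    return CORES_PER_CCX * ccx_per_die * dies

print(total_cores(2))          # 2-CCX APU      -> 8
print(total_cores(3))          # 3-CCX CPU      -> 12
print(total_cores(4, dies=4))  # 4-die MCM of 4-CCX dies -> 64
```

So a 2x CCX APU, a 3x CCX 12-core part, and a 64-core 4-die Epyc all fall out of the same repeated block, which is the whole appeal of the modular approach.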
AMD PR has commented that Epyc and Vega are the only products being released on 7nm this year. Apparently AMD is also releasing a new super-efficient H series CPU.
These appear to have less cache, but the memory support is interesting.
"AMD Reference Platform, “Raven Ridge 2018” AMD Ryzen™ 7 2800H, 2x8GB DDR4-3200, Samsung VLV2560 SSD
Windows 10 x64 16299.64, Graphics Driver: 23.20.768.0, 1920×1080"
https://www8.hp.com/h20195/v2/GetPDF...A7-3217EEP.pdf
Having said that - look at the AT article on the 2990WX and how much of the power budget Infinity Fabric is taking up. I suspected the reason they had it in a 1:2 ratio with memory frequency was down to power, but it's surprising how much Infinity Fabric consumes.
I also looked at the TH article on Ryzen+, and cache latencies are as good as Intel's now - so it is basically down to latency from the inter-CCX communication and system memory.
So it's quite possible a lot of the improvements are to the Infinity Fabric instead of pure clockspeed, especially if they end up adding more cores.
This might have an impact on games too: if they can get latencies down, that would probably help a lot.
Plus this is GlobalFoundries we are talking about, so their claims of 5GHz clockspeeds need to be taken with a grain of salt until proof is shown!!
Dev who is making their own raytracing based game:
https://twitter.com/SebAaltonen/stat...83494670577664
The RTX 2070 can do 6 Gigarays/s in comparison. It even seems Pascal is not that bad either. The chap was a former senior rendering lead at Ubisoft.
Quote:
Claybook ray-traces at 4.88 Gigarays/s on AMD Vega 64. Primary RT pass. Shadow rays are slightly slower. 1 GB volumetric scene. 4K runs at 60 fps. Runs even faster on my Titan X. And with temporal upsampling even mid tier cards render 4K at almost native quality.
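To put that 4.88 Gigarays/s figure in context, here's a quick bit of arithmetic (the throughput and 4K/60fps numbers are from the tweet; the per-pixel budget is my own back-of-envelope working):

```python
# Rough check of the Claybook claim: 4.88 Gigarays/s at 4K 60 fps.
# How many rays per pixel per frame does that throughput allow?
rays_per_sec = 4.88e9        # claimed Vega 64 throughput (from the tweet)
pixels = 3840 * 2160         # 4K frame
fps = 60

rays_per_pixel_per_frame = rays_per_sec / (pixels * fps)
print(round(rays_per_pixel_per_frame, 1))  # prints 9.8
```

Roughly 10 rays per pixel per frame for the primary pass, which explains how it holds 4K at 60 fps.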
Looks quite a bit like we are facing a GameWorks/PhysX scenario again, where really all Nvidia does is package something up, kind of like a plug-in for developers, making it marginally easier for them to develop their game rather than going the universal and potentially more difficult route.
I had a kind of disagreement with the Battalion 1944 developers in the early alpha days, and they started going on about how much easier it was to develop on Nvidia and how bad AMD cards are, claiming that if we're not developing a game there is no way we can disagree with them...
Seems CB have published some leaked prices for the i9-9900K:
https://www.computerbase.de/2018-08/...-i7-9700-euro/
€560!
So it looks like the 8C/8T i7 will slot in near the old 8700K price, while the new LGA 115x flagship will be around 25% more.
Maybe they're soldering with gold and platinum!
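A quick sanity check on that 25% figure, assuming the 8C/8T i7 lands around €450 (roughly the old 8700K price - my assumption, not a leaked number):

```python
# Sanity check: is the leaked €560 for the i9-9900K roughly 25% above
# an assumed ~€450 for the 8C/8T i7 (near the old 8700K price)?
i9_price = 560   # leaked figure from the CB article
i7_price = 450   # assumption: roughly the 8700K's price

premium = (i9_price - i7_price) / i7_price
print(f"{premium:.0%}")  # prints 24%
```

So the "around 25% more" estimate checks out, give or take the i7's exact launch price.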
https://www.crn.com/news/components-...in-on-7nm-cpus
Quote:
AMD CTO: 'We Went All In' On 7nm CPUs