Quote:
Le roi est mort, vive le roi. ("The king is dead, long live the king.")
No reason to buy this over the 1950X. The 1950X can also be overclocked, and although it doesn't clock quite as high, the performance delta really isn't that big. The power consumption is also quite similar, which is surprising given Intel's technological advantage.
But I wanted a Core i11 ;)
This thing is ridiculously expensive for what it is. Who is going to use it? The only use case I can see where this might legitimately be worth the money is professional game streamers. Otherwise you are better off getting Threadripper - or even Epyc, since the 24C/48T SKU was meant to sit somewhere around this price point.
Intel needs to get Coffee Lake out the door asap; at this point I can only think of three or four scenarios where an Intel processor makes more sense than Ryzen.
...or Ryzen, with a little caution thrown in! http://www.hardwarecanucks.com/forum...deep-dive.html
You buy this, then one month later AMD gives you 24 cores for $1,999 on the same TR4 socket - who will be the i-SHEEP then?
Literally no gamer needs this. There is no gaming workload that requires it; this is purely for workstation loads. Hell, the 10-core chip could quite happily game, stream, AND encode the streams at the same time, ha.
This is basically a lower priced Xeon. I'd be interested to see how much of a dent it puts in the Xeon bottom line, as I know a ton of people switching over to i9s from double-socket xeon systems of only a few years past, and saving themselves several grand in the process.
I might be wrong here, but the review states that 4.5GHz under 'Turbo 3.0' is only available on two cores of the 7980XE - and yet the conclusion seems to suggest that 4.5GHz is available on ALL cores. Am I reading it wrong?
"the Intel Core i9-7980XE is the fastest consumer processor ever launched. A dose of overclocking makes it untouchable for heavily-threaded applications."
Except by a 24-core Epyc for ~$1,050, which proves a much larger point. When Infinity Fabric soon gets around to multi-GPU as well, neither Intel nor Nvidia will have anywhere to hide from AMD's "modular and scalable" onslaught.
That's what Hexus got all cores running at when they overclocked the chip.
I too was surprised that the power draw is a lot closer between AMD and Intel than I thought possible. I would like to think that it's because AMD did a good job rather than Intel doing a bad job.
.....a complex mesh architecture.... doesn't that sound like 'gluing' processors together? Infinity Fabric is not a bottleneck after all.
Unless live-capture encoding in x264 (or maybe even x265) is a lot less multithreaded than I think it is, good-quality live video encoding would benefit greatly from this. People at the moment are a bit stuck: either using GPU capture (which washes away a lot of detail) or encoding somewhere on the fast-ultrafast preset spectrum. This i9 would enable both better image quality and smaller files - which, if YouTube gaming or similar is your job, does make a big difference.
Of course what else you'd do with it is a very good question. Threadripper gives multithreaded performance that gets you very close for nearly half the price, and if money truly is no object in the quest for a superfast workstation then why not splash out and get the 24C Epyc complete with 8-channel RAM?
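To make the preset trade-off above concrete, here's a sketch of a software-encode command builder. The `libx264` `-preset` and `-crf` flags are standard ffmpeg options; the filenames and the helper function itself are hypothetical.

```python
# Illustrates the quality/CPU trade-off described above using x264 presets.
# (Hypothetical helper; filenames are made up, flags are standard libx264.)
def encode_cmd(preset: str, crf: int = 23) -> list[str]:
    """Build an ffmpeg software-encode command: slower presets spend more
    CPU for better compression at the same visual quality (CRF)."""
    return ["ffmpeg", "-i", "capture.mkv",
            "-c:v", "libx264", "-preset", preset, "-crf", str(crf),
            f"out_{preset}.mp4"]

# A CPU-starved streamer is stuck near "ultrafast"; an 18-core box could
# afford "medium" or "slow" for smaller files at the same quality level.
print(encode_cmd("ultrafast"))
print(encode_cmd("slow"))
```

The point is that the preset, not the resolution, is usually what a live encoder has to sacrifice when cores run out.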
AMD can't give you 24 cores on socket TR4 - at least not in this generation. The infrastructure and silicon isn't there to support it.
OTOH, they can already give you 32 cores (64 threads) in a single socket for around the same price. Motherboard support is still sadly hard to find, but the processors are there and ready to roll....
Price vs performance is still a huge issue... maybe Intel just has extremely horrible yields, or is trying to milk the world, or whatever....
At this point I would unfortunately rather buy two 16-core Ryzen CPUs and build two systems; besides, with everything else in the systems adding up, you would barely feel the difference anyway.
For streaming any CPU that isn't a bottleneck at 60fps is fine, because what streaming sites can send higher than 60fps content to viewers?
I'd be interested in how the 7401P compares, and I could see it taking a lot of the business customers that wanted cheaper Xeons - system cost shouldn't be too far off this i9 (a £600 cheaper CPU, but it can't exploit the "cheap" X299 platform), and it does wide-and-slow better.
Yeah, I'm surprised at just how much cheaper the P variants (single socket systems only) are compared to the normal ones: the 2P capable version of the same processor will set you back more than half as much again. Given that the 2P systems basically just grab half the PCIe lanes for infinity fabric, I'd be surprised if there's that much difference between them...
I'm actually even more interested in the 7281 - a 16-core EPYC for less than £700. You get the full platform benefits - 8-channel memory, 128 PCIe lanes, etc. - but in a mainstream workstation price bracket.
And let's remember that's a 2P capable chip, so you can slap two of those in a supporting motherboard for 32-core goodness at around £1300 of CPU cost.... :O_o1:
EDIT2: turns out ballicom are listing a dual socket SP3 motherboard for £650. So for the cost of *just* the i9 7980XE, you can buy two 16-core EPYC processors and a supporting motherboard....
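As a sanity check on those numbers, here's the arithmetic using the prices quoted in this thread; the ~£2,000 figure for the i9-7980XE is an assumption for illustration, not a quoted listing.

```python
# Back-of-the-envelope platform cost comparison (prices from the thread;
# the ~£2,000 i9-7980XE street price is an ASSUMPTION for illustration).
epyc_7281 = 700        # £, 16C/32T EPYC 7281 (per the thread)
dual_sp3_board = 650   # £, dual-socket SP3 motherboard listing
i9_7980xe = 2000       # £, assumed street price of the 18-core i9

dual_epyc_build = 2 * epyc_7281 + dual_sp3_board
extra_cores = 2 * 16 - 18   # cores gained over the single i9

print(f"2x EPYC 7281 + board: £{dual_epyc_build}")  # £2050
print(f"i9-7980XE alone:      £{i9_7980xe}")
print(f"Extra cores:          {extra_cores}")        # 14
```

So the two-socket EPYC build comes out within about £50 of the bare i9 CPU, with 14 extra cores and the full platform attached.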
"First off, the chip keeps to a 3.4GHz frequency under load. Second, HandBrake doesn't scale that well past 10 cores; we know this by looking at how the CPU behaves in Task Manager."
This is true on both the Intel and AMD platforms. I sense some partiality here...
The issue isn't the gaming framerate but the encoding framerate. Last time I did any semi-professional encoding, a 4C/8T 4GHz i7 managed around 3fps at high quality and 0.7fps at max quality without GPU acceleration. Things have moved on a lot since then, but doubling IPC and quadrupling the core count would still only yield 24fps. There's plenty of room yet for more cores and better scaling in encoding workloads. Not everyone is satisfied with the high-speed fixed-function encoders in modern GPUs.
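The projection above is just multiplication, and it assumes perfect scaling, which real encoders won't achieve (see the HandBrake comment earlier in the thread):

```python
# Rough scaling estimate from the post above. Assumes ideal linear scaling
# across IPC and core count, which is optimistic for video encoders.
baseline_fps = 3.0   # 4C/8T i7 @ 4GHz, high-quality settings (per the post)
ipc_gain = 2.0       # "doubling IPC"
core_gain = 4.0      # "quadrupling the core count"

projected = baseline_fps * ipc_gain * core_gain
print(projected)  # 24.0 fps - still below real-time for 30/60fps content
```

Even the best case lands under real-time for 30fps footage at those quality settings, which is the poster's point: encoding can absorb far more cores than games can.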
So the programs that are slow all basically draw a whole frame at a time instead of part of one. That is always going to be limited by how fast you can encode and decode.
The ones that are faster on multiple cores use binary space partitioning (BSP) or render buckets to divide the screen up into smaller pieces. To be blunt, the ones using BSP would actually get better numbers with a faster video card at a higher resolution. AMD proved that years ago.
The way that works is: the smallest part of the screen that can be drawn without affecting the piece next to it is calculated on the first thread, the adjacent partition is skipped, and the next non-adjacent one is started on a new thread. I tend to force 1/4 or 1/8 BSP or render buckets in engines that support it, to get better quality than the engine is expected to support - because if your BSP overheats the memory by saturating the bandwidth, non-ECC memory has no way to check whether there are errors due to overheating. The point is that the chip is getting the results tech artists expect; getting better results in games is easy, but it might not be worth voiding the warranty on your video cards.
You can also force what are called fractal divisors in Hero Engine-based games to treat the screen as narrow blocks running from top to bottom, getting ahead of screen tearing by using an odd-numbered BSP that forces the engine to use narrow rectangles instead of square blocks.
The downside is that you can melt the memory on most video cards doing that. The card simply passes what it thinks is a complete frame to card memory and swaps it out to the monitor at a rate of about eight to one versus what it would normally take to render the whole screen, by using multiples of four cores and otherwise-unused GPU shader cores. Basically it's sort of an exploit of the SLI code, but it doesn't require drivers - only forcing the software to use BSP or render buckets instead of whole frames. In theory, if the memory weren't shared, you could link both a Threadripper and an 18-core to cards and have them use the SLI code to render via the BSP built into most gaming engines, increasing the quality you see on screen until the memory on the video cards melts.
As scaryjim pointed out, there's no contest on bang-for-buck in parallel workloads between AMD and Intel - an Epyc system can throw far more cores at the problem than the i9, and much of the Epyc product stack is cheaper. The only benefit to the Intel CPU is a ~10% boost to framerate in some CPU-limited games, and you only hit that bottleneck when the game framerate is well in excess of what streaming needs. For the same price as this system you could have a Ryzen 1700/1800X to play the games and a dedicated capture box with an Epyc CPU to encode the output. Many streamers already use a separate computer to record, to take the load off the gaming system, so turning the recording computer into the recording-and-encoding system wouldn't change much.