Read more: 2nd Gen Threadripper, now starting at $649.
It would be really interesting to see how the 2990WX and 2970WX compare to their EPYC counterparts in multi-threaded workloads. EPYC has the advantage of twice the memory channels and every core having direct local memory access, while Threadripper has the clock-speed advantage.
Enough with the TDP-derived Bang4Watt scores... they're just so detached from reality as to be worthless. You can see from the Blender system-wide power consumption that there's a 61W (44%) difference between the 8700K and 9900K despite the same 95W TDP. The new Threadrippers with some cores disabled appear to save around 5-10% in power draw compared to their full-fat counterparts, yet carry the same official TDP.
If it's not possible to isolate CPU power usage while running a benchmark, just use the measured system-wide power consumption. That's what end users will be paying their electricity company for anyway.
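To make the divergence concrete, here's a minimal sketch of how a score-per-watt figure changes depending on whether you divide by the TDP label or by measured wall power. All the wattages and scores below are made-up placeholders for illustration, not numbers taken from the review.

```python
# Illustrative only: "Bang4Watt" computed from the TDP label versus from
# measured system-wide (wall) power. Every number here is a hypothetical
# placeholder, not a figure from the review.

def perf_per_watt(score, watts):
    return score / watts

TDP_LABEL = 95                                 # both chips carry the same 95W rating
wall_power = {"8700K": 139, "9900K": 200}      # assumed wall draw under Blender
scores = {"8700K": 100, "9900K": 135}          # assumed benchmark scores

for cpu in scores:
    by_tdp = perf_per_watt(scores[cpu], TDP_LABEL)
    by_wall = perf_per_watt(scores[cpu], wall_power[cpu])
    print(f"{cpu}: {by_tdp:.2f} pts/W by TDP label, {by_wall:.2f} pts/W by wall power")
```

With identical TDP labels the two chips look equally efficient per watt; divide by measured wall power instead and the picture changes completely, which is the whole point of the complaint.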
Hi, I agree with Lanky123: the Bang4Watt score is useless, as all it takes is a look at the actual power consumption and things start to look different. Intel is using a legal loophole to rate its CPU at 95W, even though under load it will exceed that by a lot most of the time.
So people who actually care about Bang4Watt would likely use the real measurements rather than the numbers on the label.
And "so-so gaming"? These machines usually have powerful GPUs and 4K screens attached to them. Gaming is on par, and they beat Intel in production workloads. So...
Actually, I think AnandTech have a better approach to looking at power usage - https://www.anandtech.com/show/13516...2970wx-2920x/2
When you look at something like the i9-9900K drawing 168W at full load, when it is meant to be a 95W part, it really shows up the difference in how Intel and AMD define TDP. AMD seems closer to reality.
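For anyone curious about why the definitions diverge: as I understand it, Intel rates TDP as average power at base frequency under a defined workload, while AMD has described its TDP as a purely thermal figure derived from maximum case temperature, ambient temperature and the cooler's thermal resistance. A rough sketch of that AMD-style relationship, with made-up inputs rather than official figures for any particular CPU:

```python
# Sketch of the AMD-style thermal TDP relationship (as publicly described):
#   TDP (W) = (tCaseMax - tAmbient) / HSF_theta_ca
# where HSF_theta_ca is the minimum thermal resistance (degC per W) of the
# cooler the chip is specified against. Inputs below are hypothetical.

def amd_style_tdp(t_case_max_c, t_ambient_c, hsf_theta_ca):
    """Heat a specified cooler must be able to move, not electrical draw."""
    return (t_case_max_c - t_ambient_c) / hsf_theta_ca

print(f"{amd_style_tdp(62.0, 42.0, 0.21):.0f} W")  # ~95W with these assumed inputs
```

Both definitions allow a chip to pull more at the wall than the label suggests (Intel's because turbo sits above base frequency, AMD's because it describes cooling headroom rather than electrical draw), which is exactly why measured consumption is the more honest comparison.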
Doesn't look like my earlier attempt at a comment made it.
It would be interesting to see results with processor affinity set to exclude core zero, as this has produced performance uplifts in some areas, given the core/thread utilisation issues mentioned in the article. A minimal sketch of trying that is below.
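Here's a minimal sketch of launching a workload with core zero excluded from its affinity mask, assuming a Linux box and Python's standard library; the benchmark command is a placeholder. On Windows you'd use Task Manager's affinity dialog or something like psutil instead.

```python
# Minimal sketch: run a workload with processor affinity set to exclude
# core zero. Assumes Linux (os.sched_setaffinity); the command is a placeholder.
import os
import subprocess

all_cpus = set(range(os.cpu_count()))
without_core_zero = all_cpus - {0}

def run_excluding_core_zero(cmd):
    # preexec_fn runs in the child just before exec, so only the benchmark
    # process (and anything it spawns) inherits the restricted mask.
    return subprocess.run(
        cmd,
        preexec_fn=lambda: os.sched_setaffinity(0, without_core_zero),
    )

run_excluding_core_zero(["./my_benchmark"])  # placeholder command
```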
As a Haswell-E user, I'm very interested in this, especially the 2920X (let's not get silly with the costs, guys; over three hundred quid for a CPU is already crazy). Up until the gaming tests, the 2920X looked like a no-brainer, but the difference in gaming is really quite marked. I find that surprising and disappointing.
1) There's no Haswell-E in this line up. Haswell -> Skylake was (iirc) around a 15% IPC increase, so the HEDT Intels on display have both IPC and clock speed advantages over your existing rig.
2) The gaming difference is only "quite marked" at 1080p with a very high end GPU, and even then in two of the games tested the 2920X is pushing out minimum framerates over 60fps (and in the other no CPU is getting over 50fps minimum). So the question becomes what games, what resolution and what GPU? Because at 4k with a 1080 Ti, there is no meaningful difference in gaming performance...
^^This^^ x100.
Both companies calculate TDP differently, which makes using it as a metric completely useless. Either base the metric on system-wide power consumption, don't do it at all, or attempt to account for all the variables between the two different ways of calculating TDP.
I have to say that very few products are named according to their ability... but the Threadripper does exactly what it says on the tin.
It rips threads.
End of
Originally Posted by Advice Trinity by Knoxville
"Would you really be pairing a high-end desktop CPU and GeForce GTX 1080 Ti with gaming at 1080p? Unlikely."
$699 CPU, for a 1080 Ti... YES, for sure, as 4K has meant nothing to me for years, unless they pop out a bunch of 16:10 stuff soon. I can wait, and the fps you get with everything turned on in EVERY game means performance sucks. It's against my religion NOT to play a game exactly the way the devs wanted me to see it. Turning stuff down is dumb.
As for the chips above this, I have no use for that many cores, currently not even for work at that price. But I'm not saying AMD shouldn't be charging that much (charge as high as you can get!).
Any chance you guys can try running y-cruncher on those CPUs? Intel has a factor of two at the top of the table over the AMD chips (and unlike PiFast this seems to be decent, up-to-date code, so it means something), but their home page only has older Threadrippers on it.
http://www.numberworld.org/y-crunche...charts/1b.html
While a fair argument, it still shows a CPU bottleneck. I'd imagine more gamers tend to upgrade their GPU than their CPU, particularly on Intel where you need a new mobo, and also new RAM if your system is particularly old. Another factor is that some people seem to swear by 144Hz monitors over 4K, so going AMD will cap the use you can get out of them. Personally, I'm hoping AMD can compete more in that area next year, and then I can decide to finally take the plunge.
As someone who swears by their 144Hz 1440p monitor, I have to say it has given my old FX-8350 an extended life, as I can now tolerate low frame rates much more than I could with my old 60Hz monitor. So sorry, that doesn't help your argument. If there is a bottleneck, you can see it in the 4K minimum frame rates, as those actually matter now, rather than in some mythical guess about the future.
Telling someone with a 4K monitor that they have a 1080p bottleneck is like telling me I am too fat to use the kitchen door because I can't get through the cat flap.