Source (Daum) adds that the Zen 4 server CPU will have 12-channel DDR5 support and a 320W TDP.
Ah, this explains why Intel is finally allowing AVX-512 in desktop parts with Rocket Lake. It's unlikely AMD is fusing it off for desktop parts.
I just don't understand why we need several iterations of vector extensions instead of implementing variable-length vectors, Cray style. Less silicon area wasted, fewer issues with hot spots, etc. The trouble with not owning the instruction set is that AMD has to repeat Intel's poor engineering practices to remain attractive.
This is just absolutely nuts, especially the 160 gen 5.0 lanes for 2P systems... that's over 600 GB/s (over 5 Tbps) of PCIe bandwidth.

96 cores (192 threads)
12-channel DDR5-5200
128 PCIe gen 5.0 lanes (160 for 2P systems)
320W TDP (configurable TDP to 400W)
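That bandwidth figure checks out; here's a quick back-of-the-envelope calculation, assuming PCIe 5.0's 32 GT/s per lane and 128b/130b line encoding:

```python
# Rough check of the PCIe 5.0 bandwidth claim for a 2P system.
# PCIe 5.0 runs at 32 GT/s per lane with 128b/130b line encoding.
GT_PER_S = 32
ENCODING = 128 / 130          # usable fraction after encoding overhead

gb_per_lane = GT_PER_S * ENCODING / 8   # GB/s per lane, one direction
total_gb_s  = 160 * gb_per_lane         # 160 lanes in a 2P system
total_tbps  = total_gb_s * 8 / 1000

print(f"{gb_per_lane:.2f} GB/s/lane, {total_gb_s:.0f} GB/s, {total_tbps:.2f} Tbps")
# 3.94 GB/s/lane, 630 GB/s, 5.04 Tbps
```

So "over 600 GB/s" and "over 5 Tbps" are both accurate for one direction of traffic.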
Also, the power consumption is crazy: the Zen 3 parts were 64c/128t at 280W, giving around 4.4W per core, whereas these new Genoa chips will be 320W over 96c/192t, which is around 3.3W per core! Even upping to the cTDP of 400W, that's still 4.2W per core. That 5nm die shrink is working some serious magic.
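Those per-core figures work out as follows (nominal TDP divided by core count, ignoring the IO die):

```python
# Per-core power at nominal TDP (package power / core count).
zen3_per_core   = 280 / 64    # 64c Zen 3 EPYC at 280W: ~4.4 W/core
genoa_per_core  = 320 / 96    # rumoured 96c Genoa:     ~3.3 W/core
genoa_ctdp_core = 400 / 96    # at the 400W cTDP:       ~4.2 W/core
```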
On top of that, DDR5 at those speeds is absolutely needed; on a lighter note, when Linus ran into the storage speed issue he explained quite well that the total PCIe bandwidth was nearly all of the CPU's DDR4 bandwidth.
Oh man, I can't wait to see these announced, demoed then tested, these are Ker-razy!
The only thing left for Intel once AMD adds AVX-512 and BFLOAT16 support is NVDIMM support; hopefully AMD will be bringing something like that in too.
Anandtech's 3990X review measured roughly 3.1W/core at maximum load: 200W for the cores and 80W for the IO die and IF, so there's an obvious place to make real efficiency gains without touching the CCXs. DDR5 might be lower power; PCIe 5 probably isn't. Lots of variables at work.
Hard to say how much of a net efficiency gain there is without the rated frequencies.
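Working through those 3990X numbers:

```python
# Splitting the 3990X's measured power at full load:
# ~200W for the 64 cores, ~80W for the IO die and Infinity Fabric.
core_w_each = 200 / 64         # ~3.1 W per core
io_fraction = 80 / (200 + 80)  # uncore share of package power

print(f"{core_w_each:.1f} W/core, uncore = {io_fraction:.0%}")
# 3.1 W/core, uncore = 29%
```

Nearly a third of the package power going to the uncore is the "obvious place" mentioned above.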
Hubba hubba... far too costly for most of us though.
It does make you think what could happen with next-gen Ryzen: could we potentially end up getting more than 16c/32t at the top end? DDR5 already gives the potential to double the RAM capacity and adds ECC...
I wouldn't be shocked to see Ryzen and Threadripper merge into one platform.
Maybe this lack of stock is a good thing, because the next version looks like it might take things up another notch (assuming next-gen Ryzen supports DDR5, etc.).
In fairness, these parts are designed for servers/data centers... not consumers.
Although, if AMD follows the Zen 2 pattern, they will probably make a Threadripper version of this exact chip as well... and you can bet it will be expensive (definitely on par with, or more expensive than, the 64c TR).
I think Warhol (Zen 3+), which we will see this year, will continue with the same core counts as Zen 2 and Zen 3 (possibly on AM5, unless AMD decides to keep it on the existing AM4, which wouldn't be too bad).
Since the 5nm node allows for roughly 80% higher density vs 7nm, and assuming the 8-core chiplet design is retained... I think the top-end consumer CPU to replace the 16c/32t part will be a 24c/48t CPU.
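As a sanity check on that 24-core guess (using TSMC's roughly 1.8x logic density figure for N5 vs N7; the three-chiplet layout is my assumption):

```python
# If N5 packs ~1.8x the logic density of N7, the silicon budget of
# today's 16-core part could hold roughly 16 * 1.8 core-equivalents.
density_gain  = 1.8
current_cores = 16
budget = current_cores * density_gain   # ~28.8 core-equivalents

# 24c/48t would be three 8-core chiplets, comfortably within budget.
chiplets, cores_per_chiplet = 3, 8
assert chiplets * cores_per_chiplet <= budget
```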
I've not seen this many tech upgrades since AMD got back in the game, and it also shows that companies like Intel have been holding back on purpose, which makes me a bit angry... but oh well, we'll see what we get this time next year; hopefully I can do my next big build on DDR5.
I am on Threadripper now, but that will stop with my next upgrade; in general it's an unsustainable approach for mortal PC users, and today even power users should consider whether TR is really worth it.
My response was merely from a technical standpoint, as I remember all the years with just one core, and when they announced 90nm I was nearly blown off my chair.
While going beyond 16c/32t might happen, they are not going to rush towards it if the net result would be to harm the margins of their lower-end Threadrippers. Similarly, merging the platforms seems very unlikely, for the same reason: it wouldn't actually make them more money.
Noctua are currently ripping their hair out.
Insane platform.
BEST explanation of AVX-512 ever. It should be noted that graphics cards employ this SIMD scheme, except on STEROIDS: NVIDIA cards have been processing things 32-at-a-time for years now, while AMD cards process 64-at-a-time. So AVX-512 is still "catching up" in some respects to what GPU hardware can do.
Still, it's way easier to program for a CPU alone than to transfer CPU data to the GPU and coordinate two different machines, with two different coding structures, at the same time. So the AVX-512 feature is definitely very welcome.
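As a toy model of the lane widths being compared (a sketch only; real AVX-512 code would use intrinsics or autovectorisation, not Python):

```python
# SIMD in miniature: one "instruction" operates on a whole chunk of
# lanes at once. AVX-512 gives 512/32 = 16 float32 lanes; an NVIDIA
# warp is 32 lanes wide and an AMD GCN wavefront is 64.
AVX512_LANES = 512 // 32   # 16

def simd_add(xs, ys, lanes=AVX512_LANES):
    """Add two lists chunk-by-chunk, modelling fixed-width SIMD."""
    out = []
    for i in range(0, len(xs), lanes):
        # conceptually a single vector instruction over `lanes` elements
        out.extend(a + b for a, b in zip(xs[i:i+lanes], ys[i:i+lanes]))
    return out

print(simd_add([1, 2, 3], [10, 20, 30]))   # [11, 22, 33]
```

The fixed chunk width is exactly the design point the earlier "variable-length vectors, Cray style" comment objects to: each new extension (SSE, AVX, AVX-512) bakes a different width into the instruction set.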
This is just absolutely off the chart
Erm, how many pins?!?
AVX-512 has no practical use on desktops. In fact, it has extremely limited uses in general, seeing how it's nowhere near widespread adoption. It's rarely (if ever) actually used.
It's also a power hog.
Pardon?

> I just don't understand why we need several iterations of vector extensions instead of implementing variable-length vectors, Cray style. Less silicon area wasted, less issues with hot spots etc. The trouble with not owning the instruction set is that AMD has to repeat Intel's poor engineering practices to remain attractive.
I was under the impression that AMD invented x86-64 (AMD64) and holds patents on techniques used in AMD64 which have to be licensed from AMD... and that actually seems to be the case.
Speaking of Cray, have you seen this:
https://www.theverge.com/2019/5/7/18535078/worlds-fastest-exascale-supercomputer-frontier-amd-cray-doe-oak-ridge-national-laboratory
Also, this might help
https://www.linleygroup.com/mpr/article.php?id=11753
GPUs seem to have those capabilities, but most software isn't written to take advantage of HW acceleration and falls back on older methods.
In other words, it could be a software stagnation issue.
Just look at how long it takes devs to add support for AMD GPUs' hardware acceleration in pro software, despite the fact that OpenCL and other open standards work just as well as (or better than) CUDA.