Promises to use Zen architecture to "return innovation and choice to the datacentre."
Given these are supposed to contain 4 dies per CPU, we'll have to see how good the Infinity Fabric between them really is compared to Intel's QPI and ring-bus-based uncore topology. I can see these being great for embarrassingly parallel workloads, but they might struggle if you have to share data between dies via the L3 cache / LLC (which is already split between each die's core complexes) - but we'll see.
Back-of-the-envelope calculation comparing unidirectional bandwidth:
Infinity Fabric: 64 lanes of PCIe 3.0 = 48.4 GB/s typical (64 GB/s theoretical)
E5-2699 v4 QPI: 2 x 9.6 GT/s = 38.4 GB/s
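A quick sanity check of those figures in Python, assuming PCIe 3.0's 8 GT/s per lane with 128b/130b encoding and QPI moving 2 bytes per transfer per link - the 48.4 GB/s "typical" figure presumably also folds in packet/protocol overhead on top of the line coding:

```python
# Sanity check of the unidirectional bandwidth figures above.
# Assumptions: PCIe 3.0 signals at 8 GT/s per lane (1 bit per transfer)
# with 128b/130b encoding; QPI at 9.6 GT/s moves 2 bytes per transfer per link.

PCIE3_RATE = 8e9            # transfers/s per lane
PCIE3_ENCODING = 128 / 130  # 128b/130b line-coding efficiency
LANES = 64

pcie_raw = PCIE3_RATE * LANES / 8 / 1e9   # 64.0 GB/s theoretical
pcie_enc = pcie_raw * PCIE3_ENCODING      # ~63.0 GB/s after line coding
# The 48.4 GB/s "typical" figure quoted above presumably also accounts
# for packet/protocol overhead, which this sketch doesn't model.

QPI_RATE = 9.6e9            # transfers/s
QPI_BYTES = 2               # bytes per transfer per link, per direction
LINKS = 2

qpi = QPI_RATE * QPI_BYTES * LINKS / 1e9  # 38.4 GB/s

print(f"64x PCIe 3.0: {pcie_raw:.1f} GB/s raw, {pcie_enc:.1f} GB/s after 128b/130b")
print(f"2x QPI @ 9.6 GT/s: {qpi:.1f} GB/s")
```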
Looks good, and it makes me want to know more about the inter-die topology: will each die get full access to the Infinity Fabric, or will the 64 lanes be partitioned between them?
Last edited by een4dja; 08-03-2017 at 01:47 PM.
If AMD can get some SC and server wins for Naples, they can also sell AMD FirePro cards as part of the package - which means a win/win scenario.
This is what we need - keep the pressure on them, AMD!
This will be where AMD must put the skeptics to rest. Godspeed.
.....Any chance you'll get one, Hexus?
I hope there's a 'desktop' version with 16 cores / 32 threads, a clock of say 3-3.5GHz, support for 8 DIMMs, and a price under 1k... I can dream, can't I...
They used to do an Opteron on the desktop, so I can hope that they do the same with these.
The old desktop Opterons were actually the same silicon as the consumer desktop chips, using the same socket. AMD hasn't made a prosumer/workstation grade platform available in the channel since the s940 days, and even then only until they tweaked the platform into s939 form so it could work without registered memory. Since then a whole variety of Opteron processors have been released on socket AM[2|3](+), but the only differences between them and the equivalent Athlon/Phenom/FX CPUs were the clock speeds and TDPs.
That said, there's currently no real comparison for Naples - AMD's Opteron line consists of single-die AM3+ CPUs, single-die C32 CPUs (basically the same as the AM3+ parts but supporting more memory and up to 2 CPUs per system), and 2-die G34 CPUs supporting up to 4 sockets.
It looks like they plan to serve that high-end G34 market with an up-to-2-socket platform (increasing the dies/cores per CPU but reducing the maximum number of sockets on the platform) - so G34 goes up to 16C per socket for 64C per system, while Naples is 32C/64T per socket for 64C/128T per system.
That means they might release a replacement for C32 with 2-die MCMs, potentially limited to 1 socket per system (or they might remain at up to 2 sockets). Whether they'll make it to the channel for prosumer/workstation builds is another matter entirely...
From what I gathered reading this WCCFTech article, each Naples CPU has 128 PCI Express Gen 3.0 lanes; however, when configured in a 2U dual-socket server, 64 of those lanes are used for inter-processor communication.
I think een4dja was asking whether each die gets full access to the Infinity Fabric.
Since the CPUs have 128 PCIe lanes in total, each die must have 32. My guess would be that in 2P configurations the inter-CPU link is made up of 16 PCIe lanes from each die, so each die has an exclusive x16 connection to the fabric, which are then aggregated to form the x64 link between the sockets...
My mistake, I read his/her mention of 64 lanes as meaning the link between the sockets.
EDIT: Question: is it even using Infinity Fabric for communication between the dies, or is that only used between the sockets?
Last edited by Corky34; 08-03-2017 at 04:45 PM. Reason: Wanted to pose a question.
AMD has to succeed. Why? Because competition is needed to push companies into making better products. Intel has been resting on its laurels and really porking the customer in terms of its high prices. Maybe this will get Intel's attention.
...128 lanes? Wow.
I'll preorder the Bologna/Rome version.
Well, individual dies already have "Infinity Fabric" for communication between the CCXes. It would appear that there's some method for connecting the fabrics in each die without reducing the number of PCIe lanes available (and of course Naples seems to have more available PCIe lanes on each die than we've seen from consumer AM4 processors anyway), but until it launches we won't know how.
Whatever the layout is, there must be some way of connecting the 4 dies together through "Infinity Fabric" such that they can communicate with each other while each still has its 32 lanes of PCIe 3.0 to the outside world. Then, if you use 2 Naples processors in a system together, half of those PCIe lanes get bound into the "Infinity Fabric" link that connects the CPUs together, and to me the only sensible way to do that would be to use 16 lanes from each die - that way each die connects to all the other dies on its own CPU and directly to the other CPU.
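A minimal sketch of that lane accounting, assuming the guesses above hold (4 dies per CPU, 32 lanes per die, each die donating x16 to the inter-socket link in a 2P system - all assumptions until AMD publishes the actual topology):

```python
# Hypothetical lane accounting for the guessed 2P Naples topology.
# All numbers are assumptions drawn from the discussion above, not
# confirmed by AMD: 4 dies per CPU, 32 PCIe lanes per die, and each
# die donating 16 lanes to the socket-to-socket Infinity Fabric link.

DIES_PER_CPU = 4
LANES_PER_DIE = 32
DONATED_PER_DIE = 16   # lanes each die contributes to the 2P link

lanes_per_cpu = DIES_PER_CPU * LANES_PER_DIE    # 128 lanes per CPU
inter_socket = DIES_PER_CPU * DONATED_PER_DIE   # x64 link between sockets
io_per_cpu = lanes_per_cpu - inter_socket       # 64 lanes left for I/O

print(f"Per CPU: {lanes_per_cpu} lanes; 2P link: x{inter_socket}; "
      f"I/O per CPU in 2P: {io_per_cpu} (128 system-wide, same as 1P)")
```

Conveniently, that arrangement leaves a 2P system with the same 128 usable PCIe lanes as a single-socket one, which matches what's been reported.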
Clearly "Infinity Fabric" isn't a single connection type, though - it runs within a die and between physical CPUs, and I'm assuming it also runs between dies in an MCM CPU. That would make it more like a low level protocol: think something like TCP/IP, which describes how messages move around, vs. LAN, modems and wifi, which are different physical carriers than can transmit TCP/IP packets...
I've always dismissed AMD, but I will definitely have to start rethinking that. Watching to see how this turns out.