Is AMD working on a product similar to NvLink?
Yeah, it's a compute card; bound to end up as a Titan though, isn't it?
I think the 10x performance claim was for neural network training, where the 16 bit FP support should make it fly.
Even when demos are given behind closed doors, people who can't report on specifics will still say a demo happened; since nobody is saying even that, I guess no working silicon has been demoed yet.
High speed interconnect is nothing new or special for AMD, and on headline speeds HT is faster than NVLink. Having said that, I'm sure I read somewhere that HyperTransport was a bit dated and AMD were looking to do better, but I can't find the reference.
HyperTransport 3.1 on a single 16-bit link can usually do 12.8GB/sec per direction, unless you go for the full 32-bit width, which allows 25.6GB/sec in each direction.
As usual I haven't seen any proper specs on NVLink, but Nvidia claim 20GB/sec on a link. That is one direction, so you might see the number doubled for the full cross-sectional bandwidth. Nvidia talk about running 4 links together for an aggregate 80GB/sec. Really, the slides look rather like the "look how we can connect these Athlons together in a grid" slides from 15 years ago, just with GPUs and more up-to-date speeds.
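To show my working, those headline numbers fall out of some quick back-of-envelope arithmetic. This is just a sketch: the HT figures are the published 3.1 clock and widths, the NVLink per-link number is Nvidia's claim taken at face value, and the function name is mine for illustration:

```python
# Peak bandwidth in one direction: transfers/sec * bits per transfer / 8 bits-per-byte
def link_bw_gb_s(transfer_rate_gt_s, width_bits):
    return transfer_rate_gt_s * width_bits / 8

# HyperTransport 3.1: 3.2GHz clock, double data rate -> 6.4 GT/s
print(link_bw_gb_s(6.4, 16))   # 12.8 GB/s per direction on a single 16-bit link
print(link_bw_gb_s(6.4, 32))   # 25.6 GB/s per direction at the full 32-bit width

# NVLink, taking Nvidia's 20GB/sec-per-link claim at face value
print(4 * 20)                  # 80 GB/s aggregate across 4 links, one direction
print(4 * 20 * 2)              # 160 GB/s if you count both directions
```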
Anyone seen how many lanes NVLink uses? As the more modern design, I presume it is 16 per direction, but facts seem thin on the ground.
Note also that this is a kind of prototype first outing for NVLink, and PCIe 4 at 32GB/sec is not far behind, with release next year.
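For comparison, that 32GB/sec figure is just the x16 arithmetic from the published PCIe 4.0 rates (16 GT/s per lane with 128b/130b line encoding), rounded up in the marketing:

```python
# PCIe 4.0 x16, one direction: 16 GT/s per lane with 128b/130b line encoding
lanes, rate_gt_s, encoding = 16, 16.0, 128 / 130
print(lanes * rate_gt_s * encoding / 8)   # ~31.5 GB/s, marketed as "32GB/sec"
```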
I get the feeling, though, that the diagram you/they showed is like some of their other claims: it will eventually come with a PCIe connection (it's capable of using PCIe), but the first cards are only going to come with NVLink. My guess is they're fabricating (and initially selling) large silicon and binning the ones that don't make the grade for a lesser (lower power draw) HPC card that will use PCIe.
What they're doing with HBM2 is using it on an HPC class card; comparing HPC cards with gaming cards is like comparing apples with oranges.
JEDEC doesn't (afaik) specify an operating speed or throughput; they just say *up to*. I could be wrong, but I thought JEDEC specifications were more about setting standards for things like voltage, prefetch, and things that are more to do with *how* RAM works.
The card must have PCIe support. The NVLink will only work if you are plugging the cards into a supporting IBM Power CPU based platform, and much though I like the IBM Power based machines, there will be workloads that are less dependent on the host CPU, where a bottom-of-the-range Xeon running a couple of cards over PCIe makes more sense.
Edit to add: Is it just me who sees "P100" and thinks "Well, it was better than the Pentium P60, but the P100 was still rubbish"?
I'm not saying it's never going to support PCIe, I'm saying that presently the P100 card itself doesn't, at least that's what AnandTech seems to indicate. My guess is they're binning Pascal: the high end goes into the P100, and lesser ones will probably go into a lower-power, PCIe card.
EDIT: In the AnandTech article they have a picture of the NVLink hybrid cube mesh and *that* connects to the CPU via PCIe, but the cards themselves don't have a PCIe connection.
No, that Anandtech article shows all 8 P100s connected to the Xeon CPUs by PCIe links going through PCIe switches:
The black lines are clearly labelled PCIe. If they didn't have PCIe they couldn't communicate with the CPU/rest of system so wouldn't be able to do anything.
Just depends what you call a card, really. The diagrams show that you can have two P100 chips per PCI-E connection, much the same as the Radeon Pro Duo and previous Tesla cards. How you lay out those interfaces/chips is quite flexible, it seems, but it's still primarily communicating with the rest of the system via PCI-E.
I'm sure the option is available to use either, depending on the target system. Most systems will want to use PCIe, but those Dept of Energy supercomputers will use an awful lot of GPUs so they may well use nothing but NVlink. Those probably aren't fully designed yet, so will also want to keep some flexibility.
I expect it would have a PCIe connector somewhere, even if it doesn't look like a conventional one.
The card would be what the chip is mounted on; it's the printed circuit board that can be inserted into an electrical connector. The P100 does *not* come with an electrical connector for PCIe. That's not to say it has no PCIe protocols, circuits, or PCIe-related hardware in the chip or on the P100's printed circuit board; in fact, going on the AnandTech article, the two electrical connectors on the backside of the card are split: half uses the PCIe protocol and half NVLink.
At a guess, Nvidia will release a P50 later in the year made from the Pascal silicon that didn't make the cut; that's when we'll see an HPC card that uses a PCIe electrical connector.
So it doesn't have an electrical connector for PCIe, but half of its electrical connector is for PCIe?
It doesn't use a conventional PCIe edge/slot connection that conforms with one of the published PCIe standards for connectors. It *does* have an electrical connector for PCIe. It's a non-standard, proprietary connection, but it has one.
You can't plug it into a standard motherboard PCIe socket, but that's fine and no-one is going to care about that. That card you pictured understands PCIe as a protocol, the Anandtech article states "Though the GP100 GPU at the heart of the P100 supports traditional PCI Express".
Edit to add: I should learn to type faster!
No, it doesn't have a PCIe electrical connection; there's no PCIe electrical interface. The electrical connectors/interfaces are both NVLink, as shown in the backside shot of the card...
It's totally possible that one of the two NVLink connectors could, theoretically, be rewired to a standard PCIe connector: going on the AnandTech article, half of those two connectors carry the PCIe protocol and the other half the NVLink protocol, but both use the NVLink *connector*. That's why I said I expect them to come out with something like a P50, a cut-down version of the P100, that drops the NVLink connectors and rewires one of them to a PCIe connector.
The electrical connector/interface is the edge/slot connector: an electro-mechanical device for joining electrical circuits as an interface using a mechanical assembly.
Those little WiFi cards that you get in laptops are PCIe. The modern SSDs that come on a small card and don't look anything like a hard disk, those can be PCIe. Heck, even a USB C connector can carry PCIe. Some SATA ports and network ports on motherboards use PCIe without the signals ever going over a connector.
Those two big connectors you show in that image, one of them has PCIe signals on it.
Edit to add: In fact, one of those connectors is supposed to be power and PCIe, the other has four NvLink connections on it.
I think you are confusing PCIe, which is an electrical signalling and logical configuration standard, with these edge connectors, which are just an option:
We aren't talking about edge connectors.