Referred to as "the world's most powerful AI system," it leverages 16x Tesla V100 GPUs.
How can you hook this system up to a laptop and use it for Blender or 3ds Max renders?
As I said in an OC3D thread, I love how they're calling this the largest "GPU". It's like me calling a rack-mount multi-drive storage server the largest "HDD".
I guess it comes down to whether people perceive the whole system as a "Graphics Processing Unit", or as a system composed of multiple "Graphics Processing Units" under a single management core?
Those interconnects look like Intel's monolithic die interconnects?
On a slight tangent, I really don't like those leather jackets he wears; I don't really feel they suit him xD
But will it mine?
Old puter - still good enuff till I save some pennies!
How does this compete against AMD's Project 47?
Wake me when you put two SoCs in a new Shield TV unit and run them at ~100 W to compete with consoles better. I'm waiting with cash, but I refuse to buy the current models with years-old tech for gaming. I want more or I won't buy, since a Roku already works for movies/shows. You would sell more if you at least updated the SoC so gamers get more bang. Surely they could hack off the deep-learning part and make a go of it.

Not sure why NV appears to have given up, when it pushes the platform just like CUDA did (that took eight years to really get into full swing). I can't blame them for aiming at cars, deep learning, etc.; I just can't understand why ~$10 million (roughly the tapeout cost for an SoC, last I checked) is too much to update a console you sell for years, especially given how weak current consoles are compared to PCs.

At 10 nm this SoC should be pretty good in mobile, even strapping on a modem from Intel/Qualcomm. It wasn't worth it before Android gaming took off, but now Android gaming is pretty darn good, making the GPU part finally worth buying. Clearly from the specs here, it's a power sipper, not a hog.
A few interesting videos have been uploaded by Nvidia since this story was published:
Hmm, not sure that's an apt comparison. I think a rack with several multi-socket, multi-core blades being seen as one CPU is more apt. The point they're making is that each card, with its multiple cores, has no downside to being separated across several nodes, and is just seen as one giant mass of cores rather than board count × core count.
The drive comparison especially doesn't work because, speaking strictly from a performance perspective, storage benefits greatly from striping, as it's typically a very basic, well-known set of actions: read and write.
So yeah, I'd go with the CPU comparison; the sketch below shows the rough idea.
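To make the "one giant mass of cores" point concrete, here's a minimal CUDA sketch of the naive way you'd split one workload evenly across every GPU in a box. This is an illustration of the concept under discussion, not NVIDIA's actual NVSwitch programming model; the kernel, buffer sizes, and even-split strategy are all made up for the example.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Toy kernel: each thread scales one element of its GPU's slice.
__global__ void scale(float* data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int N = 1 << 24;                 // total elements across all GPUs
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount < 1) { printf("no CUDA devices found\n"); return 1; }
    if (deviceCount > 16) deviceCount = 16; // cap to match the array below

    const int chunk = N / deviceCount;     // naive even split, one slice per
                                           // GPU; remainder ignored for brevity
    float* slices[16] = {nullptr};

    // Launch one slice on each GPU. Kernel launches are asynchronous, so all
    // the devices crunch their slices concurrently.
    for (int d = 0; d < deviceCount; ++d) {
        cudaSetDevice(d);
        cudaMalloc(&slices[d], chunk * sizeof(float));
        cudaMemset(slices[d], 0, chunk * sizeof(float));
        scale<<<(chunk + 255) / 256, 256>>>(slices[d], chunk, 2.0f);
    }

    // The "split penalty" lives here: waiting on stragglers and moving any
    // data between GPUs. A fast all-to-all fabric is what shrinks that cost.
    for (int d = 0; d < deviceCount; ++d) {
        cudaSetDevice(d);
        cudaDeviceSynchronize();
        cudaFree(slices[d]);
    }
    printf("work split evenly across %d GPU(s)\n", deviceCount);
    return 0;
}
```

With independent slices like this, the split is nearly free; it's the moment the slices need to talk to each other that the interconnect starts to matter.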
It is kinda amazing if he's even slightly on the mark with his point; slashing the performance penalty for splitting the load is no mean feat, generally speaking. If they've cracked it, it'll be a great boost to the folks who know how to put this stuff to use.
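For reference, the usual yardstick for that penalty is strong-scaling efficiency (standard HPC bookkeeping, not a figure NVIDIA has published here): speedup S(N) = T(1) / T(N) and efficiency E(N) = S(N) / N, where T(N) is the wall-clock time on N GPUs. Perfect splitting gives E(N) = 1. As a made-up example, if one GPU takes 16 s and sixteen GPUs take 1.1 s, then S ≈ 14.5 and E ≈ 0.91, so roughly 9% is lost to the split.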
Because I don't have the bloody foggiest haha.