Over 7,000 Tesla GPUs combine in record-breaking Chinese number-cruncher.
That's a lot of folding power...
Wonder what size PSU they use? Chernobyl?
Join the HEXUS Folding @ home team
GPUs - average 128 processing cores
CPUs - average 4 cores
How long before we see GPUs taking over more traditional CPU processing in PCs?
(Yes, I know we have CUDA for some stuff - but it's very niche.)
From the little I know about GPU architecture, the performance improvement of stream processors over regular CPUs really depends on the data you are processing, how you are processing it, and how the code for the task is written. Some operations simply wouldn't work on GPUs, others would be incredibly inefficient, but then there are tasks that see spectacular performance improvements from running on GPUs.
Supercomputers are usually given tasks that require processing a mass of data in a particular way. Give it the task, walk away for X amount of time, and return to find your results. Just like rendering a 3D movie, really. This is why GPU-based systems are far more efficient at this sort of thing - it's exactly what stream processors are designed to do.
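To make that concrete, here's a minimal CUDA sketch (purely illustrative - the kernel and names are made up, not anything Folding@home or Tianhe-1A actually runs): one trivial, independent operation per data element, which is exactly the shape of work stream processors are built for. Branchy, serial code is the opposite case and would crawl.

[code]
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Each thread scales one element independently - no branching, no
// communication between threads: ideal stream-processor work.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main(void)
{
    const int n = 1 << 20;                   // ~1M elements
    float *h = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float *d;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("h[0] = %f\n", h[0]);             // expect 2.0
    cudaFree(d);
    free(h);
    return 0;
}
[/code]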
Also, bear in mind that it may use 7,168 Tesla M2050 GPUs to do the heavy lifting, but the system still requires "14,336 unnamed multi-core CPUs" to feed them data, run the OS, manage comms between nodes, etc.
Personally, I would be more impressed if they had 10,000 GPUs and 1,000 CPUs in something like this.
Chalk this one up to massive marketing spin for nVidia.
The lab also doubles up as a heat source for the city.
Okay, a practical note.
The Tesla M2050 is an underclocked GeForce GTX 470 with more RAM. They're still constrained by the same limits as a GeForce - i.e. they're dual-slot GPUs, so they force you into 2U servers with room for the cards. And if you have a 2U server, you'd be pretty dumb not to make use of all that room with CPUs (given you can easily get 4 CPUs into 1U of space). They might be freaky half-width servers, i.e. 2U tall but with two servers in the chassis, giving the equivalent of 1 server per U whilst still accepting dual-slot cards.
The best packing ratio you could achieve, conceptually, is 8 GPUs to 1 CPU in 3U of space (rough arithmetic in the sketch after the list):
* 1U server with a single CPU and two PCIe slots
* 2x Tesla S2050 (the 1U quad-GPU chassis), each in 4:1 configuration (i.e. all four GPUs served to a single host interface card, rather than the usual 2:2 config)
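For the curious, the back-of-envelope density numbers as host-side C (both chassis configurations here are my assumptions for illustration, not Tianhe-1A's actual node design):

[code]
#include <stdio.h>

/* Rough GPU-packing comparison. The "conventional" layout assumes a
 * 2U server carrying 2 CPUs and 2 dual-slot Tesla cards; the
 * "conceptual best" is the 1U host + 2x quad-GPU S2050 idea above. */
int main(void)
{
    double conv_gpus_per_u  = 2.0 / 2.0;  /* 2 GPUs in 2U = 1 GPU/U  */
    double conv_gpu_per_cpu = 2.0 / 2.0;  /* 1:1 GPU:CPU             */

    double best_gpus_per_u  = 8.0 / 3.0;  /* 8 GPUs in 3U ~= 2.67/U  */
    double best_gpu_per_cpu = 8.0 / 1.0;  /* 8:1 GPU:CPU             */

    printf("conventional:    %.2f GPUs/U, %.0f:1 GPU:CPU\n",
           conv_gpus_per_u, conv_gpu_per_cpu);
    printf("conceptual best: %.2f GPUs/U, %.0f:1 GPU:CPU\n",
           best_gpus_per_u, best_gpu_per_cpu);
    return 0;
}
[/code]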
I wonder what kind of points per day that thing could spit out while folding.
But... can it run Crysis? (Yeah, I thought I'd keep the joke alive.)
Damn it jack, I read the whole article just to reply that!
But theoretically (although I don't know the performance hit for using WINE), you have 2.5 PFLOPS at your disposal, which can be roughly compared to the 0.907 TFLOPS of a single GTX 460. Round that up to 1 TFLOPS to account for emulation losses and you get roughly 2,500x the PPD of a GTX 460 - although you'd need a decent internet connection so the machine isn't bottlenecked by continuously uploading/downloading work units, with many seconds of downtime waiting for WUs. Taking ~9.5k PPD for a GTX 460 as a rough estimate, you'd be looking at 23,750 kPPD (23,750,000 PPD) - or over 20% of all the folders in the world combined.
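For anyone who wants to sanity-check that arithmetic, here it is as a few lines of plain C (every input is a guess from the post above, not a measured figure):

[code]
#include <stdio.h>

/* Rough PPD estimate - all inputs are guesses, not measurements. */
int main(void)
{
    double system_flops = 2.5e15;  /* ~2.5 PFLOPS for the system     */
    double gtx460_flops = 1.0e12;  /* GTX 460 rounded up to 1 TFLOPS */
    double gtx460_ppd   = 9500.0;  /* ~9.5k PPD for a single GTX 460 */

    double scale = system_flops / gtx460_flops;  /* ~2,500x      */
    double ppd   = scale * gtx460_ppd;           /* ~23.75M PPD  */

    printf("scale factor:  %.0fx\n", scale);
    printf("estimated PPD: %.0f (%.2fM)\n", ppd, ppd / 1e6);
    return 0;
}
[/code]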
So it's not exactly going to cure all diseases in days, but it would be a massive boost.