Anyone fancy running one of these?
http://www.bbc.co.uk/news/technology-15758057
Now, if it doesn't bottleneck my 6870 and comes in a 775 package, it could be a winner.
Looks like Intel has been canvassing the media successfully!
It's a co-processor like the Tesla cards and their AMD equivalents. It is NOT a CPU. It is based on Larrabee IIRC. The chip AFAIK even makes the 40nm Nvidia Fermi-based GPUs look small, despite being 22nm. On top of this, the APIs it uses will determine its success, and will also determine its real-world (not theoretical) performance.
The chip is still a prototype, and older chips like the Nvidia GF110 GPU (GTX580) used in the Tesla M2090 can already produce 665GFLOPs DP. The RV870 GPU (HD5870) used in the AMD FirePro 3D V9800 can already do 544GFLOPs DP, and that card was released last year. These are both based on the older TSMC 40nm process too.
The next-generation 28nm Nvidia GPUs and even the next AMD GPUs will probably exceed 1TF. These are both based on new architectures which will improve DP performance significantly.
Last edited by CAT-THE-FIFTH; 16-11-2011 at 10:46 PM.
It's x86, it hardly needs fancy gubbins. Throw in Platform Computing support and it's almost plug-and-play in many datacentres:
http://www.xbitlabs.com/news/other/d...elerators.html
We've got a cluster of terabyte RAM machines at work that I'd love to see paired up with a MIC.
"The next generation 28NM Nvidia GPUs and even the next AMD GPUs will probably exceed 1TF. These are both based on new architectures which will improve DP performance significantly."
Whereas these do.
Return of the x87 chip?!?!?!?
Yawn. Again, if it were so easy, Intel would have launched 500GFLOP to 600GFLOP versions last year using 32nm production tech, and they would have been competitive. You don't even know if the 1TF figure is sustainable, or whether under real-world conditions it will be anywhere near that. Power consumption is another factor too, as the prototype chips, from what I gather, are huge. Until it has actually hit production you have no clue. Both the AMD and Nvidia cards are known quantities, and the Nvidia units are widely used.
However, I have been following Knights Ferry and Knights Corner for a while. ATM, despite Intel's bragging that it will be the best thing since sliced bread, it seems not a huge number of companies are interested in it, even now. AFAIK, there is one design win for it (summer 2011), and even then the statement is vague, i.e., "it will be added when available".
The machine which will be using it will be operational in 2013:
http://www.physorg.com/news/2011-09-...igital-xd.html
This makes sense if the cards will be launched sometime in 2012. Last year, 2011 was meant to be the actual launch year for Knights Corner. If the cards had already been delivered to customers this year it would be a bigger deal, but that hasn't happened.
Whereas the cards I mentioned already deliver, and have been out for a while. There were prototypes of the latest Tesla cards out even in late 2010, and those could hit nearly 700GFLOPs a year ago.
The Intel card mentioned is a PROTOTYPE which has not even entered production.
Big flipping deal. They won't even be out until next year. Wait a second! BOTH Nvidia and AMD are working on newer-generation GPUs like Kepler and GCN, which will be out early next year in quantity.
But of course, since they don't canvass the international media as much as Intel does every time they make a prototype, none of them exist!
Last edited by CAT-THE-FIFTH; 17-11-2011 at 11:58 AM.
"Traditional supercomputers were built by putting thousands of processors in a room but in the last few years there has been a shift toward graphic processors," said Martin Reynolds, a vice president at research firm Gartner.
"GPUs allow you to get results more quickly but will take longer to program so there is an interesting trade-off," he said.
Surely once the initial coding is done, you are off, so the trade-off isn't really much of a trade-off; it's more of an initial milestone, which isn't in the way for Intel-based HPCs.
Added to which, how many GPUs can you stick in a server compared to how many co-processors can you stick in a server?
Depends on what you're doing. We're always updating programs, so creating additional development branches for non-x86 architectures adds way too much cost, either for us or our collaborators (or we'd already have done it). But the programs that we do create are already designed to scale, and I think they'd need very little optimisation for this.
We don't know the thermal/power characteristics yet, but theoretically exactly the same, if they both use a PCI-E card form factor. There's talk of KC using QPI as well, which creates some further opportunities.
GPGPU is neat for a few tricks, but it's way too inflexible for our uses.
Basically still Larrabee, which they're doing their best to make use of so it's not a complete waste of time and effort. Let's face it, you're not going to get 1TF worth of Sandy Bridge cores or anything close on these chips; they are going to be very cut-back, simple cores, and x86 doesn't tend to scale very well towards that end of things. Also, you're not going to be able to use the same programs you would use on a big chip if you hope to fully utilise what the chip has to offer. It seems to fit somewhere between RISC chips like SPARC Niagara and GPUs. GPUs are much more than graphics cards with a bodge to make them do GPGPU now; in fact, the new architectures are designed pretty much the other way round. I'd be very surprised if these chips give anything like decent FLOPS/watt compared to RISC/GPU...
Edit: Oh and Tilera is another company to look up for some interesting reading IMO...
Last edited by watercooled; 17-11-2011 at 05:04 PM.
Guess you can only really use it with specialist applications specifically written for it.
Shame... imagine how it could run Battlefield 3
Made me giggle that. xD
I'm probably more critical than most about this, but I'm of the opinion that if something's worth doing, it's worth doing properly the first time. I.e. I'd rather use a different architecture if it means better performance in the long run, over keeping things as they are to make life easier in the short term. It's worked well multiple times in the past, e.g. multi-core and unified shaders: even though programmers really disliked programming for them as it took more effort, it really paid off. I think the consoles played a big part in that happening as soon as it did, and it's a shame to think how much longer we'd have been stuck with single-threaded games if that hadn't been the case.