1.11 million transactions per second inserted into a single database.
Sounds great, but I don't know what SQL Server can achieve on a standard piece of kit, i.e. a RAID 5/6/50/60 array. What would the performance increase be from using this bit of kit...?
A standard (i.e. non-SSD) storage array at the same capacity would be much slower and much less expensive (you're looking at £50k for the 5TB of SSD storage alone), while non-SSD storage at the same price would be many times larger, but still slower, especially in this kind of workload.
The price of these setups means they don't really get sent out to review sites like Hexus, but I'm pretty sure there are some reviews of the cheaper version of this card (still about £4k I think) around online to give you an idea.
I don't mean to sound cold, or cruel, or vicious, but I am, so that's the way it comes out.
There is absolutely no way that a mechanical storage array can beat even a single consumer-level SSD. RAID does not improve seek time or latency. You can have a building full of HDDs but that array will never have latency of less than 1ms. HDDs should have died a long time ago.
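A rough back-of-the-envelope sketch of that point (in Python, with ballpark figures that are assumptions rather than measurements): striping more spindles together multiplies aggregate IOPS, but each individual request still waits on one drive's mechanics.

```python
# Back-of-the-envelope: striping improves aggregate IOPS, not per-request latency.
# The latency figures below are illustrative assumptions, not measurements.

HDD_LATENCY_MS = 8.0   # ~seek + rotational delay for a 7200rpm drive
SSD_LATENCY_MS = 0.1   # typical consumer SSD random read

for n_drives in (1, 8, 64):
    # Each request still waits on a single spindle's mechanics,
    # so latency is unchanged however wide the stripe is.
    latency_ms = HDD_LATENCY_MS
    # Throughput, on the other hand, scales with the number of spindles.
    iops = n_drives * (1000 / HDD_LATENCY_MS)
    print(f"{n_drives:3d} HDDs: ~{iops:8.0f} IOPS, latency still {latency_ms} ms")

print(f"  1 SSD : ~{SSD_LATENCY_MS} ms, i.e. ~{HDD_LATENCY_MS / SSD_LATENCY_MS:.0f}x lower latency")
```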
Would be interesting to know the life span of those IO drives under that kind of sustained usage... little point having that massive write capability if the drives are dead in a few months!
That said, I appreciate you'd never see that kind of hammering in real-world daily use!
You can mitigate a lot of the cost by only using these SSD arrays for the transaction logs though.
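As a sketch of what that looks like in practice (database name, paths and connection string below are hypothetical, and the T-SQL is just the standard CREATE DATABASE placement syntax): keep the data files on the conventional array and put only the log on the SSD. The log is written sequentially and synchronously at every commit, so it's the file that benefits most from low write latency.

```python
# Minimal sketch: data files on a conventional RAID volume, transaction log
# on the SSD. DSN, database name and paths are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;Trusted_Connection=yes",
    autocommit=True,  # CREATE DATABASE can't run inside a transaction
)
conn.execute("""
    CREATE DATABASE OrdersDB
    ON (NAME = OrdersDB_data, FILENAME = 'D:\\raid\\OrdersDB.mdf')     -- HDD array
    LOG ON (NAME = OrdersDB_log, FILENAME = 'E:\\ssd\\OrdersDB.ldf')   -- SSD / ioDrive
""")
conn.close()
```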
But all this talk of disks distracts from the main jaw-dropper of the article for me... "64-core AMD Server".
Agreed, but we're waiting on the price of SSDs to fall. E.g. if I want a two-slot solution storing 1TB, then a normal SATA III drive pairing is about £140, a paired SAS is £200, and a dual SSD is £1400!
So I'm sure we'd all like our SAN arrays to be full of SSDs (especially the folks that make those SSDs!), but in the meantime we'll just have to make do. Not that I think for one moment that in 5 years' time we won't all be regarding HDDs as "quaint" - the same way that 3.5" floppy drives are seen now. Will we then be in the situation of making the same HDD complaints against SSDs when holographic optical storage gets going? Babylon 5 style data crystals for everyone!
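Using those quoted figures, a quick sketch of the price-per-gigabyte gap:

```python
# Rough £/GB comparison using the figures quoted above (1TB in each case).
options = {
    "SATA III pair": 140,
    "SAS pair": 200,
    "dual SSD": 1400,
}
for name, price_gbp in options.items():
    print(f"{name:14s}: £{price_gbp:5d} -> £{price_gbp / 1000:.2f}/GB")
# The SSD option comes out at ten times the price per GB of the SATA pair.
```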
Getting back to the FusionIO box - impressive figures - like others, though, I can't help wondering (purely idle curiosity) how close a best-of-breed RAID or hybrid RAID (where "hot" data is stored either in RAM cache or in an SSD-based cache) would get. On a very small scale, folks I know who have these kinds of "hybrid" drives seem pretty pleased with them - and I've toyed with the idea of one of those (OCZ?) setups where an SSD and HDD are paired.
Generally, to get to SSD performance you need a big amount of cache and a lot of "short-stroked" drives - on a domestic/ROBO scale you wouldn't really be able to put something together to match it.
The hybrid approach seems to be the best at the moment, but the challenge is knowing where in your array to put a given block of data so that it's on the best type of disk at any given moment.
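As a toy illustration of that placement problem (nothing like what real arrays do internally, just the shape of the idea, with a deliberately tiny assumed SSD tier): count accesses per block and promote the most-hit blocks to the fast tier.

```python
# Toy hot/cold tiering: count accesses per block and promote the most-hit
# blocks to the (small) SSD tier, leaving everything else on the HDD tier.
from collections import Counter

SSD_CAPACITY_BLOCKS = 2  # tiny on purpose

def place_blocks(access_log):
    """Decide tier placement from an access history (a list of block IDs)."""
    hits = Counter(access_log)
    hot = {blk for blk, _ in hits.most_common(SSD_CAPACITY_BLOCKS)}
    return {blk: ("ssd" if blk in hot else "hdd") for blk in hits}

accesses = ["a", "b", "a", "c", "a", "b", "d"]
print(place_blocks(accesses))  # {'a': 'ssd', 'b': 'ssd', 'c': 'hdd', 'd': 'hdd'}
```

Real tiering runs continuously and weighs recency as well as frequency; the hard part, as above, is that what counts as "hot" keeps changing under you.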
The hybrid I was looking at was http://www.ocztechnology.com/ocz-rev...ate-drive.html, though I'd have to do some card shuffling to get the darn thing in.
Only thing that really puts me off is that the quoted IOPS beats even OCZ's "MaxIOPS" SATA3 unit (120K vs 85K), so I'd be slightly concerned what'd happen if the graphics card started really hammering at the same time as the disk did. Probably a stupid concern, but my knowledge ain't good enough. Oh, and there's the little matter of cost - £350+.
Note that this hybrid uses that Dataplex software, so you (the user) don't need to know anything about what's going to be where. As far as I'm concerned that's optimal - while tweaking the setup might be nice from a geek-lust perspective, I've got better things to do with my time! It'd make a damned fine place to put the app data (if I could afford it at the moment).
Ah yep those are great - I was thinking at the SAN layer rather than for local storage. The software does essentially what storage arrays costing hundreds of thousands attempt to do with tiering.
Shame they don't do a laptop one - it would be handy for my demo lab!