Hi, I was comparing prices and it seems I can build a 4-disk mechanical RAID array (RAID 10) for what I'd spend on a 1TB SSD. So my question is: which will be faster? Is there a lot of difference?
Also, how about a hybrid-drive RAID 10 vs a single SSD?
RAID 10 should give you greater linear throughput, some redundancy and greater capacity.
The SSD will give you much faster random reads with no redundancy.
Personally I'd do OS on a smaller SSD + everything else on spinning disks.
There aren't many situations where RAID is a great solution for desktop use anyway. Perhaps RAID 0 if you need e.g. higher continuous throughput than a single disk can offer if you're doing something like raw video capture, but it's important to remember that the disk-level redundancy offered by RAID is not a substitute for backup.
If the 1TB SSD were a PCIe-mounted SSD, or you went for 4 x 240/256GB SSDs in a RAID, the solution would be greatly improved.
Also depends what kind of HDD you were comparing to. A 1TB SSD would work out at about £300 in the UK; 4 x 240/256GB SSDs would also work out at about £300 (£75 for a 256GB Crucial MX100). That kind of money would get you 4 x 3TB HDDs. The capacity would be greatly increased, but as Agent said, it depends what you need it for?
System is Windows 8 64-bit, 8GB RAM, Intel Core i3-3220.
I am a coder and have to work with database manipulation; sometimes database creation and recreation take time, and testing is so difficult when it becomes frequent.
and also looking for:
*. Improved boot times.
*. Faster switching between heavily loaded dev tools like NetBeans and Eclipse.
What I've found with two striped drives is that transfer speeds were a little faster than one drive, but POST times take longer, as there is an extra stage to go through at startup and the drives still need to seek data.
If you go for striped and mirrored, then you will have a system that only uses two of the drives; the other two will simply mirror them.
Striped with parity will allow you to use all four drives, and every time you write new data it will alternate which drive stores the parity data that can rebuild a missing volume should a drive fail. This is the solution that would give you the best mix of performance and redundancy, IMHO. But you will only see better performance under certain circumstances; most notably, a sequence of random reads would end up taking longer than on a single SSD because of seek times, as the heads need to be moved from file to file.
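To make the parity idea concrete, here's a minimal sketch (illustrative only, not how a real controller is implemented): the parity block is the XOR of the data blocks, so any one missing block can be recomputed from the survivors.

```python
# Sketch of striping with parity (RAID 5 style): parity is the XOR of
# the data blocks, so any single lost block can be rebuilt from the rest.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Three "data disks" each holding one block of a stripe, plus parity.
disk0 = b"hello---"
disk1 = b"world---"
disk2 = b"raid5---"
parity = xor_blocks([disk0, disk1, disk2])

# Simulate losing disk1: rebuild its contents from the survivors + parity.
rebuilt = xor_blocks([disk0, disk2, parity])
assert rebuilt == disk1
```

This is also why writes carry a penalty on parity RAID: every write means reading and recomputing the parity block as well as writing the data.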
I think ultimately you would benefit more from a single SSD, or an SSD for the boot drive and then either a second SSD or a single HDD for data.
How large are the tables?
If you are testing, why not upgrade your RAM and dump the entire thing there?
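For the create/recreate/test cycle described above, one way to do this is an in-memory database. A minimal sketch with Python's built-in sqlite3 (the table and data here are made up; the same idea applies to most DB engines via a RAM disk or memory-backed storage):

```python
import sqlite3

# ":memory:" keeps the whole database in RAM, so repeated create/drop
# cycles during testing cost almost nothing in I/O.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO customers (name) VALUES (?)",
                 [("alice",), ("bob",)])
count = conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
assert count == 2
conn.close()
```

With 8GB of RAM the whole test dataset may well fit, and rebuilding the schema between test runs becomes nearly instant.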
Anandtech did a test years ago where they faced off a big RAID array of rotating media against an SSD or two. Might be worth trying to find it, but from memory the main data could live on rotating media, while for performance the index tables had to be on SSD. If you only have one drive in the system, then it sounds like you want it to be an SSD (or around 10 drives in a rotating-disk RAID array).
It was a while ago, but since then SSDs have become much faster and larger; rotating media hasn't moved on as much, so I think the only thing that might have changed is that with bigger SSDs it might not be worth using rotating disks at all for most database tasks.
Also, I am curious: nowadays do any servers use SSDs as their storage, or is it still old-fashioned mechanical RAID?
Sounds like if you were hitting that database a lot, a hybrid might actually work quite nicely? The indexes would potentially get cached, as they'll be small enough to fit.
One thing I will say, though, is that rebuilding indexes is a great way to speed up a DB if you have either a lot of data changes or a regular bulk import. I remember in one job I had to rebuild the index on one table after every bulk upload, or a 1s query would start timing out...
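The rebuild-after-bulk-import pattern looks roughly like this; a sketch using sqlite3, with made-up table and index names (other engines have their own equivalents of REINDEX/ANALYZE):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, sku TEXT)")
conn.execute("CREATE INDEX idx_orders_sku ON orders (sku)")

# Bulk import: 10,000 rows, each SKU repeated 100 times.
conn.executemany("INSERT INTO orders (sku) VALUES (?)",
                 [("sku-%d" % (i % 100),) for i in range(10000)])

# Rebuild the index and refresh the planner's statistics after the load,
# so queries against the freshly imported data don't hit a stale index.
conn.execute("REINDEX idx_orders_sku")
conn.execute("ANALYZE")

matches = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE sku = ?", ("sku-7",)).fetchone()[0]
assert matches == 100
```

Scheduling the rebuild right after each bulk upload, as described above, keeps the indexed query fast instead of letting it degrade until it times out.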
The company I work for still sells lots of server storage using spinning disks, but with hundreds of drives. If you need that many drives to get the storage you require, then the throughput from that many drives is still pretty good. Even there though, the market seems to be moving over to SSD.
Most servers these days are hosted on VM farms, so they don't have drives as such; they have a file on a NAS.
For testing use, just get an SSD and be done with it. Unless of course the production system will have spinning disks, in which case as a developer you should feel the pain of the production system else what you make will die horribly when you try to deploy it.