What the heck is a striped hard drive? I see it in spec lists and it just confuses me...
It's basically two hard drives acting as one. An oversimplification would be thinking of a file getting written 50% to one disk and 50% to the other.
When reading back, this in theory doubles the possible read rate. In the real world, the gain isn't massive.
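To make the "50% to each disk" idea concrete, here's a toy sketch of striping (a minimal illustration only; a real controller works at the block layer with much larger stripes):

```python
# Toy RAID 0 striping: data is chopped into fixed-size stripes and
# dealt out round-robin across the member "disks" (plain bytearrays here).
STRIPE = 4  # toy stripe size; real arrays use e.g. 64KB stripes

def stripe_write(data, disks, stripe=STRIPE):
    for i in range(0, len(data), stripe):
        disks[(i // stripe) % len(disks)].extend(data[i:i + stripe])

def stripe_read(disks, stripe=STRIPE):
    out, offsets, d = bytearray(), [0] * len(disks), 0
    while offsets[d] < len(disks[d]):
        out += disks[d][offsets[d]:offsets[d] + stripe]
        offsets[d] += stripe
        d = (d + 1) % len(disks)
    return bytes(out)

disks = [bytearray(), bytearray()]
stripe_write(b"ABCDEFGHIJ", disks)
print(disks)  # roughly half the bytes land on each disk
assert stripe_read(disks) == b"ABCDEFGHIJ"
```

Because consecutive stripes sit on different disks, a large sequential read can pull from both spindles at once, which is where the theoretical doubling comes from.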
I'm quite a fan of RAID 0, but it needs to be used in the right situation. You won't need it for general desktop usage.
I'm currently using RAID 0 and previously didn't have it enabled. There is a noticeable difference: load times for games are much quicker, I reckon Windows starts faster, and general navigation through files is quicker.
I went from two 36GB Raptors, striped, to a T7K250, mostly for space reasons.
For me, the performance difference in real-world terms was negligible, though HDTach and other benchmarking tools obviously reported much lower scores.
Nox
Oh, and much less hassle for dual booting windows and Linux :D
Nox
Striping a hard drive is really putting all of your eggs in one basket: if you have a failure on any disk in the stripe set, then the whole set is lost.
Is it really worth doubling your risk for a minimal increase in performance?
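The "doubling your risk" point is easy to check on the back of an envelope (the 3% figure below is purely hypothetical, for illustration):

```python
# If each drive fails independently with probability p over some period,
# a two-disk RAID 0 set is lost if EITHER drive fails.
p = 0.03                    # hypothetical per-drive failure rate
raid0 = 1 - (1 - p) ** 2    # 1 minus "both drives survive"
print(raid0)                # 0.0591 -- just under 2*p
```

For small p the array's failure probability is almost exactly twice a single drive's, which is where the "doubling" comes from.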
I bought those two Raptors off Nox and they're currently in RAID 0, backed up every night. After Xmas, though, I plan to invest in some more disks and go RAID 5 for proper redundancy.
Hmm, I was just talking to someone in another thread telling them to go for RAID 5, but I didn't say something very important: get a decent controller card. I've no experience of the SATA ones (Raptors seem like diet coke to me: 10,000RPM, not evil enough), but it's well worth paying the extra for a good controller. With SCSI ones there is such a big difference between no-name cards and, say, Adaptec. I run with a 2010S (it's zero-channel, so it's like a RAID upgrade for my motherboard). The software is very good, and performance is staggering: throughput from 5 drives, with 1 parity, is over 3 times the speed of one drive. It's quite hard to gauge exactly, though, because the card also has 48MB of cache. Lots of cache is important on RAID cards, because if one drive stalls (takes a little bit longer than it should to write some data) you get hit badly: as far as the OS is concerned, all the drives have stalled. When you start adding large numbers of drives this becomes a problem, as it starts to happen quite often!
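The parity mentioned above is, at its core, just an XOR across the data blocks, which is why a RAID 5 set survives losing any single drive. A toy sketch (illustration only, ignoring stripe rotation and the block layer):

```python
from functools import reduce

def xor_blocks(blocks):
    # XOR equal-length blocks together byte-by-byte
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"AAAA", b"BBBB", b"CCCC"]   # blocks on three data drives
parity = xor_blocks(data)            # block stored on the parity drive

# Simulate losing drive 1: rebuild its block from the survivors + parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

XOR-ing the parity with the surviving blocks cancels their contributions and leaves exactly the missing block, whichever drive died.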
But if you're skint, you can use software RAID. People immediately jump in and start whinging about CPU time being wasted, but think about it: for most tasks (gaming, office applications) you don't need complex CPU operations to be done at the same time as the HDD access; the bottleneck is entirely with the HDDs. This is true even with video compression, as most of the code goes: read into RAM, do maths on RAM, write to disk. The CPU isn't being used while a frame is being written to disk, because until that frame has been freed from RAM the encoder has nothing for the CPU to do.
Software RAID is quite simple with NT and BSD, but when it's your OS drive it's a bit tricky to get set up in the first place without another hard drive or a live CD. Linux performance is terrible compared to BSD here; I happily ran a game server for our house like this, using two identical 40GB HDDs I'd liberated from two old ex-school computers.
I don't call that terrible.
Code:
genbox conf.d # hdparm -t /dev/hdk
/dev/hdk:
Timing buffered disk reads: 176 MB in 3.02 seconds = 58.28 MB/sec
genbox conf.d # hdparm -t /dev/md0
/dev/md0:
Timing buffered disk reads: 278 MB in 3.00 seconds = 92.67 MB/sec
RAID0 gives high sequential reads yes, but it doesn't really make typical access patterns much faster.
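A crude model shows why: reading lots of small files costs one seek each, and striping doubles the transfer rate but not the seek speed. The numbers below are assumptions for illustration (except the ~58MB/s single-disk rate, taken from the hdparm run above), and this deliberately ignores that two disks can seek for different queued requests in parallel:

```python
# Crude model: time to read N small files = N seeks + transfer time.
SEEK_MS = 8.0      # assumed average seek + rotational latency
RATE_MB_S = 58.0   # single-disk sequential rate (from hdparm above)

def read_time_ms(n_files, kb_each, n_disks):
    transfer = (n_files * kb_each / 1024) / (RATE_MB_S * n_disks) * 1000
    return n_files * SEEK_MS + transfer

print(read_time_ms(1000, 4, 1))  # ~8067 ms on a single disk
print(read_time_ms(1000, 4, 2))  # ~8034 ms striped: barely any better
```

Under this model RAID 0 shaves well under 1% off a seek-bound workload, while a single big sequential read would see close to the full doubling.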
The box I had was a PIII 933MHz. Under BSD the disk access was about 10% better than Linux in a real-world environment. Try it; you will most likely find it much faster in BSD. This is down to its better thread handling, and I wasn't using 2.6 when I did this, which might make a difference (heh, Linux finally got something which even Windows 95 *shudders* had).
Quote:
Originally Posted by aidanjt
You really need to watch what happens to any code that's running; a (fairly) simple test program could go like:
Code:
read data into buffer
exception after 1 second
xor data with tail of data
have a few nops
write data back
has a minute passed? escape if it has
loop to start
See how many MB you can get written in a minute.
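A rough, hypothetical Python rendering of that pseudocode might look like this (assumptions: a temp file stands in for the disk, fsync forces writes out, and the window is shortened from a minute to a couple of seconds; the original's 1-second exception timeout is omitted):

```python
import os, tempfile, time

BLOCK = 1024 * 1024  # 1MB per pass

def churn_mb(path, duration=2.0):
    mb, deadline = 0, time.time() + duration
    with open(path, "r+b") as f:
        while time.time() < deadline:     # "has the time passed? escape"
            f.seek(0)
            buf = bytearray(f.read(BLOCK))  # read data into buffer
            tail = buf[-1]
            for i in range(len(buf)):       # xor data with tail of data
                buf[i] ^= tail
            f.seek(0)
            f.write(buf)                    # write data back
            f.flush()
            os.fsync(f.fileno())            # make sure it really hit disk
            mb += len(buf) // BLOCK
    return mb

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(BLOCK))
print(churn_mb(tmp.name), "MB churned")     # bigger is better
os.remove(tmp.name)
```

Run it against a file on a single disk and then on the stripe set, and the MB-per-window count gives a rough read-modify-write comparison.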