
Thread: Are RAID arrays worth it?

  1. #17
    Splash
    Guest
    Realistically, if you're not prepared either to invest in more disk space or to rethink how you use your current disks, you aren't going to be able to back up 650GB, unless you fancy investing in something like an Ultrium drive or two...

    I would suggest that in your situation you think through what data you actually need to keep backed up, create a separate partition (better still, an entire disk), and run a backup to that once a day/week/month/whatever is most appropriate for your situation. Either that or buy a shedload of DVD-Rs...
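
    To put "a shedload" into numbers, here's a rough back-of-the-envelope sketch (assuming nominal 4.7GB single-layer DVD-Rs; the figures are illustrative only):

        import math

        data_gb = 650        # data to back up
        dvd_gb = 4.7         # nominal single-layer DVD-R capacity
        print(math.ceil(data_gb / dvd_gb), "discs at a minimum")   # ~139 discs, before any compression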

  2. #18
    Senior Member
    Join Date
    Sep 2005
    Posts
    587
    Thanks
    7
    Thanked
    7 times in 7 posts
    Does anybody think they'll rework the RAID standards any time in the future to make them more plug & play? For instance, if you have two RAIDed drives, there should be some indication on the drives themselves that says "hey, I'm one part of a 2-disk RAID-0 array", so you can move them from system to system without setting them up fresh on a configuration screen and wiping all the data.

    Or AT LEAST, is one particular "real" RAID manufacturer (3ware, etc) working on something like this so I can buy a couple?

  3. #19
    Comfortably Numb directhex's Avatar
    Join Date
    Jul 2003
    Location
    /dev/urandom
    Posts
    17,074
    Thanks
    228
    Thanked
    1,026 times in 677 posts
    • directhex's system
      • Motherboard:
      • Asus ROG Strix B550-I Gaming
      • CPU:
      • Ryzen 5900x
      • Memory:
      • 64GB G.Skill Trident Z RGB
      • Storage:
      • 2TB Seagate Firecuda 520
      • Graphics card(s):
      • EVGA GeForce RTX 3080 XC3 Ultra
      • PSU:
      • EVGA SuperNOVA 850W G3
      • Case:
      • NZXT H210i
      • Operating System:
      • Ubuntu 20.04, Windows 10
      • Monitor(s):
      • LG 34GN850
      • Internet:
      • FIOS
    Quote Originally Posted by latrosicarius
    Does anybody think they'll rework the RAID standards any time in the future to make them more plug & play? For instance, if you have two RAIDed drives, there should be some indication on the drives themselves that says "hey, I'm one part of a 2-disk RAID-0 array", so you can move them from system to system without setting them up fresh on a configuration screen and wiping all the data.

    Or AT LEAST, is one particular "real" RAID manufacturer (3ware, etc) working on something like this so I can buy a couple?
    The problem is that there isn't any "standard" to rework.

    Vendors gain nothing from your proposal - in fact, it defeats vendor lock-in, allowing you to move to a competitor's product with ease (whereas if it's hard, you might not bother).

    This is the thing about software RAID solutions: the scenario you describe - "hey, I'm one part of a 2-disk RAID-0 array" - works fine with Linux software RAID. You can move your disks from a 3ware controller to a Silicon Image one, you could clone a PATA disk to SATA (or run one part of an array from a SATA controller and one from a SCSI controller), and it just works.
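
    As a rough illustration of why that works (a sketch only - it assumes the mdadm tool is installed and the device names below are just examples), the array metadata lives in a superblock on each member disk, so any Linux box can read it and reassemble the array:

        import glob, subprocess

        # Each md member carries its own on-disk superblock; --examine reads it
        # straight off the disk, regardless of which controller it is plugged into.
        for dev in glob.glob("/dev/sd[ab]1"):            # example member partitions
            subprocess.run(["mdadm", "--examine", dev], check=False)

        # Reassemble every array whose members can be found on this machine.
        subprocess.run(["mdadm", "--assemble", "--scan"], check=False)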

  4. #20
    Senior Member
    Join Date
    Aug 2005
    Posts
    213
    Thanks
    0
    Thanked
    0 times in 0 posts
    I've been following this thread with interest, as I am about to build a terabyte NAS server to serve media around the house. I'm thinking of maybe 4 x 300GB drives initially, on top of the OS drive (maybe flash or a USB stick for that). I'll be using Debian Linux, as I have fair experience setting it up as a server etc. (no RAID experience though).

    I was thinking of either RAID 5 or JBOD (I think this means the disks appear as one large drive). If it's JBOD, I can live with losing the data on one disk as long as the data on the other disks is OK, since I can re-rip the media files - I think that's what JBOD gives you. Important data I was going to back up to an external drive periodically anyway. Would JBOD be a better bet in this instance?

    Also, would a P3 800MHz (with 640MB of memory) be good enough for Linux software RAID, or would a 2.8GHz P4 be better? (I have the option of using either for my file server; the other would become a simple web/SSH server.)
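
    For a quick feel for the trade-off, here's a small sketch of what the two layouts would give with 4 x 300GB (illustrative figures only - real formatted capacities will be a bit lower):

        n_disks, size_gb = 4, 300

        jbod_gb = n_disks * size_gb          # 1200 GB usable, no redundancy
        raid5_gb = (n_disks - 1) * size_gb   # 900 GB usable, one disk's worth goes to parity

        print("JBOD :", jbod_gb, "GB - lose a disk and you lose (at least) that disk's files")
        print("RAID5:", raid5_gb, "GB - lose a disk and the array keeps running until rebuilt")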

  5. #21
    Senior Member
    Join Date
    Sep 2005
    Posts
    587
    Thanks
    7
    Thanked
    7 times in 7 posts
    Directhex, I disagree with that statement, because if consumers want standards and interoperability, wouldn't companies that respond to those wants attract more customers?

    For instance, I'm sure people who are already locked into a particular model of controller card will upgrade their cards every 4 to 6 years anyway, so they are eventually going to be looking for a "new" model. Now, if one company is offering a model that allows interoperability and plug & play across multiple controller cards, THAT is the company they are going to buy from.

    My biggest problem with RAID controllers is this: say you have two different arrays and you need to take one out and put the other in for whatever reason. You can't do that on existing controllers... if one brand allowed people to do this, it would definitely get more customers. The reason major servers don't do this kind of plug & play today is not because they *don't want to*, but because it has never been possible in the past, and admins grew up learning other ways of managing their data. I wouldn't necessarily want to move my arrays from computer to computer across different controller cards, but I would like to be able to plug & play different arrays on the SAME controller card without having to reconfigure each array every time I plug a different one in, and thereby lose all the data.

    Also, what you said about the Linux box sounds like a great idea for now... If I have a Linux RAID box as the "file server", will I be able to access all those files over the network from Windows PCs? I have absolutely no experience with Linux, but I do know that some PC files aren't readable on Apples.

  6. #22
    Comfortably Numb directhex's Avatar
    Join Date
    Jul 2003
    Location
    /dev/urandom
    Posts
    17,074
    Thanks
    228
    Thanked
    1,026 times in 677 posts
    Quote Originally Posted by latrosicarius
    Directhex, I disagree with that statement, because if consumers want standards and interoperability, wouldn't companies that respond to those wants attract more customers?
    Ever heard of "Microsoft"?

    Quote Originally Posted by latrosicarius
    For instance, I'm sure people who are already locked into a particular model of controller card will upgrade their cards every 4 to 6 years anyway, so they are eventually going to be looking for a "new" model. Now, if one company is offering a model that allows interoperability and plug & play across multiple controller cards, THAT is the company they are going to buy from.
    When it comes to RAID, the home user accounts for roughly 0% of the market - and corporate users will buy whatever their vendor supplies, usually 3ware or LSI.

    Quote Originally Posted by latrosicarius
    My biggest problem with RAID controllers is this: say you have two different arrays and you need to take one out and put the other in for whatever reason. You can't do that on existing controllers... if one brand allowed people to do this, it would definitely get more customers. The reason major servers don't do this kind of plug & play today is not because they *don't want to*, but because it has never been possible in the past, and admins grew up learning other ways of managing their data. I wouldn't necessarily want to move my arrays from computer to computer across different controller cards, but I would like to be able to plug & play different arrays on the SAME controller card without having to reconfigure each array every time I plug a different one in, and thereby lose all the data.
    Linux software RAID is fine with this, if you configure it that way - mostly because Linux is made by the sysadmins who want the functionality, not by companies who want to force you into lock-in.

    Quote Originally Posted by latrosicarius
    Also, what you said about the Linux box sounds like a great idea for now... If I have a Linux RAID box as the "file server", will I be able to access all those files over the network from Windows PCs? I have absolutely no experience with Linux, but I do know that some PC files aren't readable on Apples.
    Yes - run Samba on the Linux box and it shows up to Windows PCs as an ordinary network share; the daemon is compatible with Windows file sharing, for both serving and accessing. (Mac OS X ships the same Samba software, incidentally.) I access files on both Samba-on-Linux and Windows machines without any trouble.
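
    If you want to sanity-check a share from the Linux side, here's a minimal sketch (assuming the smbclient tool is installed; "fileserver" is a placeholder hostname):

        import subprocess

        # -L lists the shares the server exposes - the same ones a Windows PC
        # would see under \\fileserver - and -N skips the password prompt.
        subprocess.run(["smbclient", "-L", "//fileserver", "-N"], check=False)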

  7. #23
    Senior Member
    Join Date
    Sep 2005
    Posts
    587
    Thanks
    7
    Thanked
    7 times in 7 posts
    Quote Originally Posted by directhex
    ever heard of "microsoft"?
    Lol, I guess you're right. The commercial market has all kinds of hidden agendas.

    Anyway, I love your Linux advice. I eventually want to get a network-attached storage system going with at least 1TB of slave space and another 1TB of mirrored space, not including separate backups (which may very well be Blu-ray or HD DVD discs by the time I get it set up).

  8. #24
    Senior Member
    Join Date
    Nov 2005
    Posts
    501
    Thanks
    0
    Thanked
    0 times in 0 posts
    Quote Originally Posted by alexkoon
    If it's JBOD, I can live with losing the data on one disk as long as the data on the other disks is OK, since I can re-rip the media files - I think that's what JBOD gives you. Important data I was going to back up to an external drive periodically anyway. Would JBOD be a better bet in this instance?
    JBOD stands for "Just a Bunch Of Disks" and does exactly what it says on the tin. You lose a disk, that disk is gone, but everything else carries on as normal. (One caveat: if the disks are concatenated into a single spanned volume rather than kept as separate drives, a failed disk can take the filesystem on the whole span down with it.)

  9. #25
    YUKIKAZE arthurleung's Avatar
    Join Date
    Feb 2005
    Location
    Aberdeen
    Posts
    3,280
    Thanks
    8
    Thanked
    88 times in 83 posts
    • arthurleung's system
      • Motherboard:
      • Asus P5E (Rampage Formula 0902)
      • CPU:
      • Intel Core2Quad Q9550 3.6Ghz 1.2V
      • Memory:
      • A-Data DDR2-800 2x2GB CL4
      • Storage:
      • 4x1TB WD1000FYPS @ RAID5 3Ware 9500S-8 / 3x 1TB Samsung Ecogreen F2
      • Graphics card(s):
      • GeCube HD4870 512MB
      • PSU:
      • Corsair VX450
      • Case:
      • Antec P180
      • Operating System:
      • Windows Server 2008 Standard
      • Monitor(s):
      • Dell Ultrasharp 2709W + 2001FP
      • Internet:
      • Be*Unlimited 20Mbps
    I run the following configuration:
    Rig1:
    1x300 IDE (Boot) (Disk0)
    4x300 SATA, 20G from each disk RAID0-ed (Disk1-4, Array1)
    2x200 IDE, 20G from each disk RAID0-ed (Disk5-6, Array2)
    Rig2:
    1x160 IDE (Boot) (Disk7)
    4x160 SATA, 20G from each disk RAID0-ed (Disk8-11, Array3)

    I only use Windows soft-RAID so I can move it around.
    Performance is fine (230MB/s), and I'm now bottlenecked by the NF4 LAN and the CPU. The only disadvantage is that it's so damn slow to copy from Array1 to Disks 1-4 (or vice versa), since they share the same physical drives, so I ended up needing to copy to Array2 FIRST and then to Array1 for maximum speed. That way I get 40MB/s averaged to the array instead of 15MB/s (semi-random read/write).

    I tried the SIL3114R's software RAID 5 and it's just horrible. After initialisation I did get something like 90MB/s read/write, BUT after a while it lags badly - down to 2MB/s read/write - and with the chip struggling like that I don't think its soft RAID 5 is any more reliable than my JBOD/RAID 0.

    I'll invest in a proper RAID 5 controller (hopefully Broadcom) when they come down in price.
    Last edited by arthurleung; 04-02-2006 at 02:28 AM.
    Workstation 1: Intel i7 950 @ 3.8Ghz / X58 / 12GB DDR3-1600 / HD4870 512MB / Antec P180
    Workstation 2: Intel C2Q Q9550 @ 3.6Ghz / X38 / 4GB DDR2-800 / 8400GS 512MB / Open Air
    Workstation 3: Intel Xeon X3350 @ 3.2Ghz / P35 / 4GB DDR2-800 / HD4770 512MB / Shuttle SP35P2
    HTPC: AMD Athlon X4 620 @ 2.6Ghz / 780G / 4GB DDR2-1000 / Antec Mini P180 White
    Mobile Workstation: Intel C2D T8300 @ 2.4Ghz / GM965 / 3GB DDR2-667 / DELL Inspiron 1525 / 6+6+9 Cell Battery

    Display (Monitor): DELL Ultrasharp 2709W + DELL Ultrasharp 2001FP
    Display (Projector): Epson TW-3500 1080p
    Speakers: Creative Megaworks THX550 5.1
    Headphones: Etymotic hf2 / Ultimate Ears Triple.fi 10 Pro

    Storage: 8x2TB Hitachi @ DELL PERC 6/i RAID6 / 13TB Non-RAID Across 12 HDDs
    Consoles: PS3 Slim 120GB / Xbox 360 Arcade 20GB / PS2

  10. #26
    Senior Member
    Join Date
    Sep 2005
    Posts
    587
    Thanks
    7
    Thanked
    7 times in 7 posts
    Quote Originally Posted by arthurleung
    I run the following configuration:
    Rig1:
    1x300 IDE (Boot) (Disk0)
    4x300 SATA, 20G from each disk RAID0-ed (Disk1-4, Array1)
    2x200 IDE, 20G from each disk RAID0-ed (Disk5-6, Array2)
    Just a question... why would you have a 300GB drive for a master? If you put your files on a separate slave drive, you should never need a master with more than 50 or 60 gigs

  11. #27
    Banned Smokey21's Avatar
    Join Date
    May 2005
    Location
    Stafford, Midlands
    Posts
    1,752
    Thanks
    0
    Thanked
    0 times in 0 posts
    In one word, No.

  12. #28
    YUKIKAZE arthurleung's Avatar
    Join Date
    Feb 2005
    Location
    Aberdeen
    Posts
    3,280
    Thanks
    8
    Thanked
    88 times in 83 posts
    Quote Originally Posted by latrosicarius
    Just a question... why would you have a 300GB drive for a master? If you put your files on a separate slave drive, you should never need a master with more than 50 or 60 gigs
    Last year I had only one rig, so I only bought large drives, to reduce the number of drives in my system (while having more space).

    You are definitely right that I could do with a tiny master drive - even 10GB would be fine - but I doubt it's going to be as fast as the 300GB drive, and the 300GB drive is not going to be 30 times more expensive than a 10GB one.

    I have an 80GB partition for the OS (I admit it's far too big, but I don't want to change it now) and the remaining 201GB for static storage.

    There are a lot of things to think about before you buy your HDDs:
    1. Do you have many IDE/SATA ports left? (NF4 only has 4 SATA ports per board.)

    2. Power: can your PSU spin up 10 disks at the same time? (It's almost 2A for each 300GB Maxtor at spin-up.)

    3. Do you actually have the space to fit that many HDDs? (e.g. 800GB built from 80GB drives takes 10 slots, while built from 400GB drives it only takes 2.)

    4. What kind of use are you going for? For single-user, non-multitasking use a single drive is adequate; if you tax your I/O heavily but only on a small scale, then a couple of small (or faster) drives will do; if you're doing static nearline storage, then you're better off with gigantic drives. My usage is somewhere between the last two, and I thought I could kill two birds with one stone.

    5. Price. Consider that an 80GB drive costs about 35 quid and a 300GB about 80 quid; if you want the space then the 300GB is obviously the better buy. If you feel you won't need more than 80GB for a couple of years, then the cheaper one will do the job fine. The 300GB should also run slightly (a couple of per cent) faster - there's a rough sketch of the power and price numbers after this list.
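
    A rough sketch of points 2 and 5 in numbers (using the figures quoted above; they are illustrative only):

        # Spin-up load, using the ~2A-per-drive figure quoted above (assumed to be on the 12V rail)
        drives, amps_each, volts = 10, 2, 12
        print("Spin-up draw:", drives * amps_each, "A, roughly", drives * amps_each * volts, "W")

        # Cost per gigabyte at the quoted prices (GBP)
        for name, (gb, price) in {"80GB": (80, 35), "300GB": (300, 80)}.items():
            print(name, "->", round(price / gb, 3), "GBP per GB")   # ~0.44 vs ~0.27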

  13. #29
    Senior Member
    Join Date
    Sep 2005
    Posts
    587
    Thanks
    7
    Thanked
    7 times in 7 posts
    Wow, lol you have certainly thoroughly thought out your reasons for having that drive!

    Personally, I try to get 10000RPM drives like Western Digital's "Raptor" for the master drives. Then I like to use the slower, 7200RPM, 300GB drives as file slaves.

    The 36GB Raptor is fine for most people to use as masters, but I prefer the 74GB version b/c I have A LOT of programs and games installed, and I’m up to about 40 gigs now.

    There's a new 150GB Raptor (still 10,000RPM), but I passed on it when upgrading one of my other systems in favour of the tried-and-true 74GB version, because the extra capacity is useless to me - I slave all my files to other drives anyway. The new 150GB model does add features such as NCQ and a 16MB cache, but as those would be the only real usability difference, I didn't feel the extra price was justified.

    But I definitely see your POV with your drive setup. Cheers
