Page 2 of 4 FirstFirst 1234 LastLast
Results 17 to 32 of 55

Thread: Home Server Chit Chat

  1. #17
    Gentoo Ricer
    Join Date
    Jan 2005
    Location
    Galway
    Posts
    11,048
    Thanks
    1,016
    Thanked
    944 times in 704 posts
    • aidanjt's system
      • Motherboard:
      • Asus Strix Z370-G
      • CPU:
      • Intel i7-8700K
      • Memory:
      • 2x8GB Corsair LPX 3000C15
      • Storage:
      • 500GB Samsung 960 EVO
      • Graphics card(s):
      • EVGA GTX 970 SC ACX 2.0
      • PSU:
      • EVGA G3 750W
      • Case:
      • Fractal Design Define C Mini
      • Operating System:
      • Windows 10 Pro
      • Monitor(s):
      • Asus MG279Q
      • Internet:
      • 240mbps Virgin Cable

    Re: Home Server Chit Chat

    Quote Originally Posted by b0redom View Post
    I had more problems with Linux RAID (I used OpenFiler for a bit). I had what turned out to be a duff SATA cable which would sometimes fail a drive on reboot, so the drives would keep reordering themselves.
    I've never had any problems with md, and I've been running it for a decade. And you can hardly blame the software for faulty wiring and a lack of monitoring (mdadm can be configured as a notification daemon, so events like disk dropouts would be emailed to you). Oh, and unlike ZFS, md can increase its volume size simply by throwing a new disk into the RAID set, instead of having to replace the entire set of drives one by one and waiting for a rebuild after each and every one. On top of that, with md, even if you reach the limit of SATA ports/drive bays on your system and start swapping disks out one by one, the spare space on the new drives can be immediately utilised by creating a second md stripe set on the free partition space and pooling it with LVM (Synology do just that with their SHR/SHR2 auto-managed volumes).
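    What that md workflow looks like in practice, sketched with hypothetical device names (a rough sketch, not something to paste at disks you care about):

```shell
# Grow an existing RAID5 (/dev/md0) by one disk -- device names are examples.
mdadm --add /dev/md0 /dev/sdd              # new disk joins as a spare
mdadm --grow /dev/md0 --raid-devices=4     # reshape onto it, adding capacity
cat /proc/mdstat                           # watch the reshape progress
resize2fs /dev/md0                         # then grow the filesystem into the new space

# And the notification daemon mentioned above:
mdadm --monitor --scan --daemonise --mail=admin@example.com
```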

    And that's the problem with ZFS: when Sun was developing it, it was fundamentally designed for data centre environments where the filesystem typically interacts with and pools together expensive auto-replicated SAN allocations, or at the very least an HBA with dozens of drive bays and HDDs coming out of your ears... the kind of scenario where the cost of drives is no object and detecting data corruption is of primary importance. Home server use cases aren't even an afterthought for Oracle.
    Quote Originally Posted by Agent View Post
    ...every time Creative bring out a new card range their advertising makes it sound like they have discovered a way to insert a thousand Chuck Norris super dwarfs in your ears...

  2. #18
    mush-mushroom b0redom's Avatar
    Join Date
    Oct 2005
    Location
    Middlesex
    Posts
    3,494
    Thanks
    195
    Thanked
    383 times in 292 posts
    • b0redom's system
      • Motherboard:
      • Some iMac thingy
      • CPU:
      • 3.4GHz Quad Core i7
      • Memory:
      • 24GB
      • Storage:
      • 3TB Fusion Drive
      • Graphics card(s):
      • Nvidia GTX 680MX
      • PSU:
      • Some iMac thingy
      • Case:
      • Late 2012 pointlessly thin iMac enclosure
      • Operating System:
      • OSX 10.8 / Win 7 Pro
      • Monitor(s):
      • Dell 2713H
      • Internet:
      • Be+

    Re: Home Server Chit Chat

    What are you talking about? You can add drives to ZFS and they're immediately available. They don't even need to be the same size as the rest of the array (although personally I think you'd be nuts to do that). ZFS is more tolerant in that you can connect up the drives in ANY order and it'll still work.

  3. #19
    Gentoo Ricer
    Join Date
    Jan 2005
    Location
    Galway
    Posts
    11,048
    Thanks
    1,016
    Thanked
    944 times in 704 posts

    Re: Home Server Chit Chat

    Quote Originally Posted by b0redom View Post
    What are you talking about? You can add drives to ZFS and they're immediately available. They don't even need to be the same size as the rest of the array (although personally I think you'd be nuts to do that). ZFS is more tolerant in that you can connect up the drives in ANY order and it'll still work.
    You can 'add drives' as a JBOD, as you would to an LVM volume group, but there's no redundancy at all. Dead vdev/drive = dead zpool. Again, that's not a problem for enterprises, because a good SAN will provide HA and handle drive failure transparently and independently of the filesystems using SAN allocations. But it's a terrible idea for a home server.
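    A quick illustration of the point (pool and device names are made up): `zpool add` bolts a new top-level vdev onto the pool, it doesn't extend the existing raidz vdev:

```shell
# Pool 'tank' is backed by a raidz vdev; try to add a lone disk:
zpool add tank /dev/sdd
# zpool refuses with a mismatched-replication-level warning, because the
# new single-disk vdev has no redundancy -- if it dies, the whole pool dies.
zpool add -f tank /dev/sdd     # forcing it gives you exactly that JBOD-ish stripe
zpool status tank              # shows raidz1-0 plus a bare striped disk
```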

  4. #20
    mush-mushroom b0redom's Avatar
    Join Date
    Oct 2005
    Location
    Middlesex
    Posts
    3,494
    Thanks
    195
    Thanked
    383 times in 292 posts

    Re: Home Server Chit Chat

    Quote Originally Posted by aidanjt View Post
    You can 'add drives' as a JBOD, as you would to an LVM volume group, but there's no redundancy at all. Dead vdev/drive = dead zpool. Again, not a problem for enterprises, because a good SAN will provide HA and handle drive failure transparently and independently of the filesystems using SAN allocations. But it's a terrible idea for a home server.
    Of course there's redundancy. No one in their right mind would create a RAID-0 array for data; it just doesn't make sense.

    I'm not even sure that's possible in the GUI of FreeNAS. You use RAID-Z (RAID-5ish) or RAID-Z2 (RAID-6ish), which give you the same level of redundancy as a well-managed Linux array.

  5. #21
    The late but legendary peterb - Onward and Upward peterb's Avatar
    Join Date
    Aug 2005
    Location
    Looking down & checking on swearing
    Posts
    19,378
    Thanks
    2,892
    Thanked
    3,403 times in 2,693 posts

    Re: Home Server Chit Chat

    It isn't really relevant to talk about conventional RAID technology, as ZFS is designed to be a highly resilient filesystem anyway. It has its own versions of RAID5- and RAID6-like features, including one that tolerates three disk failures, which might be useful in large arrays where rebuilding after a failure may take some time.
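    For reference, the three parity levels being described here, as they would be created on a hypothetical six-disk pool (pool and device names are examples):

```shell
zpool create tank raidz  sdb sdc sdd sde sdf sdg   # single parity, ~RAID5: survives 1 failure
zpool create tank raidz2 sdb sdc sdd sde sdf sdg   # double parity, ~RAID6: survives 2
zpool create tank raidz3 sdb sdc sdd sde sdf sdg   # triple parity: survives 3 failures
```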

    ZFS isn't really something I have looked at (it isn't available natively on many Linux distros because of patent encumbrances) but it does have some pretty nifty features that might make it suitable for a home server.

    However, I like well tried and tested solutions, so I'll probably be sticking with ext3/4, RAID1 and offline backup.
    (\__/)
    (='.'=)
    (")_(")

    Been helped or just 'Like' a post? Use the Thanks button!
    My broadband speed - 750 Meganibbles/minute

  6. #22
    Gentoo Ricer
    Join Date
    Jan 2005
    Location
    Galway
    Posts
    11,048
    Thanks
    1,016
    Thanked
    944 times in 704 posts

    Re: Home Server Chit Chat

    Quote Originally Posted by b0redom View Post
    Of course there's redundancy. No one in their right mind would create a RAID-0 array for data; it just doesn't make sense.

    I'm not even sure that's possible in the GUI of FreeNAS. You use RAID-Z (RAID-5ish) or RAID-Z2 (RAID-6ish), which give you the same level of redundancy as a well-managed Linux array.
    There isn't redundancy if you add a disk to the zpool, because the zpool only pools storage devices; it's effectively a JBOD volume. Redundancy only exists in the vdevs via mirroring, raid-z, or raid-z2, which aren't expandable by adding disks. So to redundantly 'expand' ZFS, you have to insert the minimum number of disks for the redundancy scheme you choose, or replace every drive. And you can't reshape the redundancy type, another thing MD has no problem at all doing (e.g. single disk->raid1->raid5->raid6). The power, performance, and flexibility of MD is insane, and that's before you put LVM on top of it.
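    In command terms, the contrast being drawn looks roughly like this (hypothetical pool, array, and device names):

```shell
# ZFS: the only way to grow redundantly by adding disks is a whole new vdev,
# so a raidz1 pool needs at least three more disks at once:
zpool add tank raidz /dev/sde /dev/sdf /dev/sdg

# md: a single new disk can be absorbed into the existing array:
mdadm --add /dev/md0 /dev/sde
mdadm --grow /dev/md0 --raid-devices=5
```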

    There are aspects of ZFS I really love, but it isn't well suited to consumer home networks with ghetto server rigs; it's just too rigid. I really do hope they fix that some time in the near future, though.

  7. #23
    root Member DanceswithUnix's Avatar
    Join Date
    Jan 2006
    Location
    In the middle of a core dump
    Posts
    12,986
    Thanks
    781
    Thanked
    1,588 times in 1,343 posts
    • DanceswithUnix's system
      • Motherboard:
      • Asus X470-PRO
      • CPU:
      • 5900X
      • Memory:
      • 32GB 3200MHz ECC
      • Storage:
      • 2TB Linux, 2TB Games (Win 10)
      • Graphics card(s):
      • Asus Strix RX Vega 56
      • PSU:
      • 650W Corsair TX
      • Case:
      • Antec 300
      • Operating System:
      • Fedora 39 + Win 10 Pro 64 (yuk)
      • Monitor(s):
      • Benq XL2730Z 1440p + Iiyama 27" 1440p
      • Internet:
      • Zen 900Mb/900Mb (CityFibre FttP)

    Re: Home Server Chit Chat

    Quote Originally Posted by aidanjt View Post
    There's aspects of ZFS I really love, but it really isn't well suited for consumer home networks with ghetto server rigs, it's just too rigid. I really do hope they fix that some time in the near future though.
    I'm hoping that btrfs will be more tuned to our home servers. If it ever gets up to a release version.

    Good zfs/btrfs primer for anyone reading this and wondering what we are talking about: http://arstechnica.com/information-t...n-filesystems/

    Still on md here for now.

    Quote Originally Posted by peterb View Post
    it isn't available natively on many Linux distress because of patent encumbrances
    Really?? I thought it was just the BSD license, which is certainly enough to keep it out of most distros.

  8. #24
    Anthropomorphic Personification shaithis's Avatar
    Join Date
    Apr 2004
    Location
    The Last Aerie
    Posts
    10,857
    Thanks
    645
    Thanked
    872 times in 736 posts
    • shaithis's system
      • Motherboard:
      • Asus P8Z77 WS
      • CPU:
      • i7 3770k @ 4.5GHz
      • Memory:
      • 32GB HyperX 1866
      • Storage:
      • Lots!
      • Graphics card(s):
      • Sapphire Fury X
      • PSU:
      • Corsair HX850
      • Case:
      • Corsair 600T (White)
      • Operating System:
      • Windows 10 x64
      • Monitor(s):
      • 2 x Dell 3007
      • Internet:
      • Zen 80Mb Fibre

    Re: Home Server Chit Chat

    I find 2GB for FreeNAS more than enough as long as you don't turn dedupe on (and I wouldn't turn it on if you value your data!)

    I have 10GB RAM in mine from when I had dedupe enabled, but would be quite happy dropping to 4GB or even 2GB now. Only when a number of clients are accessing files does the RAM really get used for on-the-fly caching... a general rule of thumb seems to be 1GB of RAM for each 1TB of available ZFS storage for optimum performance.
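    That rule of thumb is trivial to express; a tiny sketch (the 1GB-per-TB figure is the folklore quoted above, not an official FreeNAS number, and the 2GB floor matches the post's baseline):

```shell
# Suggested RAM in GB for a given ZFS capacity in TB: 1GB per TB,
# with a 2GB floor for the base system.
zfs_ram_gb() {
  local gb=$1
  [ "$gb" -lt 2 ] && gb=2
  echo "$gb"
}

zfs_ram_gb 1    # -> 2
zfs_ram_gb 10   # -> 10
```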
    Main PC: Asus Rampage IV Extreme / 3960X@4.5GHz / Antec H1200 Pro / 32GB DDR3-1866 Quad Channel / Sapphire Fury X / Areca 1680 / 850W EVGA SuperNOVA Gold 2 / Corsair 600T / 2x Dell 3007 / 4 x 250GB SSD + 2 x 80GB SSD / 4 x 1TB HDD (RAID 10) / Windows 10 Pro, Yosemite & Ubuntu
    HTPC: AsRock Z77 Pro 4 / 3770K@4.2GHz / 24GB / GTX 1080 / SST-LC20 / Antec TP-550 / Hisense 65k5510 4K TV / HTC Vive / 2 x 240GB SSD + 12TB HDD Space / Race Seat / Logitech G29 / Win 10 Pro
    HTPC2: Asus AM1I-A / 5150 / 4GB / Corsair Force 3 240GB / Silverstone SST-ML05B + ST30SF / Samsung UE60H6200 TV / Windows 10 Pro
    Spare/Loaner: Gigabyte EX58-UD5 / i950 / 12GB / HD7870 / Corsair 300R / Silverpower 700W modular
    NAS 1: HP N40L / 12GB ECC RAM / 2 x 3TB Arrays || NAS 2: Dell PowerEdge T110 II / 24GB ECC RAM / 2 x 3TB Hybrid arrays || Network:Buffalo WZR-1166DHP w/DD-WRT + HP ProCurve 1800-24G
    Laptop: Dell Precision 5510 Printer: HP CP1515n || Phone: Huawei P30 || Other: Samsung Galaxy Tab 4 Pro 10.1 CM14 / Playstation 4 + G29 + 2TB Hybrid drive

  9. #25
    Gentoo Ricer
    Join Date
    Jan 2005
    Location
    Galway
    Posts
    11,048
    Thanks
    1,016
    Thanked
    944 times in 704 posts

    Re: Home Server Chit Chat

    I'm still waiting on BtrFS's RAID5 implementation to be as functionally useful as md's. But at least the filesystem is getting into a reasonable state now, and performance is on par with the more mature filesystems; I'm definitely looking forward to having an error-checking copy-on-write filesystem.

  10. #26
    mush-mushroom b0redom's Avatar
    Join Date
    Oct 2005
    Location
    Middlesex
    Posts
    3,494
    Thanks
    195
    Thanked
    383 times in 292 posts

    Re: Home Server Chit Chat

    Quote Originally Posted by aidanjt View Post
    There isn't redundancy if you add a disk to the zpool, because the zpool only pools storage devices, it's effectively a JBOD volume. Redundancy only exists in the vdevs via mirroring, raid-z, or raid-z2, which aren't expandable by adding disks.
    So you're saying that you think you can't add a disk to an existing RAID-Z array and increase the size of that RAID-Z array accordingly? I may be wrong, but I'm pretty sure that works.

    So to redundantly 'expand' ZFS, you have to insert the minimum number of disks for the redundancy technology you choose, or replace every drive. And you can't reshape the redundancy type. Another thing MD has no problem at all doing (e.g. single disk->raid1->raid5->raid6).
    Not sure how many people would want to do that anyway.....

  11. #27
    mush-mushroom b0redom's Avatar
    Join Date
    Oct 2005
    Location
    Middlesex
    Posts
    3,494
    Thanks
    195
    Thanked
    383 times in 292 posts

    Re: Home Server Chit Chat

    Quote Originally Posted by DanceswithUnix View Post
    Really?? I thought it was just the BSD license, which is certainly enough to keep it out of most distros.
    No, it's been released under CDDL, which apparently isn't compatible with the GPL.

  12. #28
    Gentoo Ricer
    Join Date
    Jan 2005
    Location
    Galway
    Posts
    11,048
    Thanks
    1,016
    Thanked
    944 times in 704 posts

    Re: Home Server Chit Chat

    Quote Originally Posted by b0redom View Post
    So you're saying that you think you can't add a disk to an existing RAID-Z array and increase the size of that RAID-Z array accordingly? I may be wrong, but I'm pretty sure that works.
    Nope. Once you create the raidz vdev, it's fixed. Only the zpool can be expanded.

    Quote Originally Posted by b0redom View Post
    Not sure how many people would want to do that anyway.....
    Far more than you'd think; it's the basis for Synology's SHR auto-managed volume mode. It's extremely useful when you're adding disks as you can afford them. Say you put together a new machine with only 2 disks at that point, so you make do with RAID1. You add another disk 2 weeks later and can reshape the array to RAID5, increasing your volume size. Another 2 weeks and you get another drive, decide RAID5 is still a little risky, and change it to RAID6. The point is, you can dynamically manage your disks as and when your needs arise. You don't have to wait until everything is 'just so' for the filesystem.
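    That grow-as-you-go path, sketched with mdadm (hypothetical partitions; each reshape runs in the background and should finish before the next step):

```shell
# Month 0: two disks, RAID1
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# Third disk arrives: reshape RAID1 -> RAID5
mdadm --add /dev/md0 /dev/sdd1
mdadm --grow /dev/md0 --level=5 --raid-devices=3
# Fourth disk: reshape RAID5 -> RAID6
mdadm --add /dev/md0 /dev/sde1
mdadm --grow /dev/md0 --level=6 --raid-devices=4
```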

  13. #29
    The late but legendary peterb - Onward and Upward peterb's Avatar
    Join Date
    Aug 2005
    Location
    Looking down & checking on swearing
    Posts
    19,378
    Thanks
    2,892
    Thanked
    3,403 times in 2,693 posts

    Re: Home Server Chit Chat

    Quote Originally Posted by b0redom View Post
    Not sure how many people would want to do that anyway.....
    If you mean change the shape/type of array, that can be useful. You can start with a single-drive RAID1 (i.e. degraded), add the second drive, then as storage requirements increase, add additional drives and migrate to RAID5 or 6. True, it isn't an everyday requirement, but it can be useful.

    Resizing an array under MDADM is pretty useful too. Add a bigger drive, rebuild the array. Add a second bigger drive, rebuild the array. Resize the array and filesystem and job done.
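    The bigger-drive swap described above, step by step (hypothetical devices; wait for each rebuild to finish before touching the next disk):

```shell
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # retire the first small disk
mdadm --add /dev/md0 /dev/sdd1                       # add its bigger replacement; array rebuilds
# ...repeat for the second disk, then claim the new space:
mdadm --grow /dev/md0 --size=max
resize2fs /dev/md0                                   # and grow the filesystem; job done
```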

    I'm not saying that resizing can't be done under the ZFS toolset, just that MDADM is very powerful for the more 'standard' filesystems and, to my mind, knocks spots off the hybrid motherboard/software solutions, and unless you go for high-end RAID controller cards, gives full hardware RAID a pretty good run for the money. But then, it seems it isn't recommended to use 'conventional' RAID hardware under ZFS anyway.

    Quote Originally Posted by b0redom View Post
    No, it's been released under CDDL, which apparently isn't compatible with the GPL.
    Yes, that's what I thought.

  14. #30
    Jay
    Gentlemen.. we're history Jay's Avatar
    Join Date
    Aug 2006
    Location
    Jita
    Posts
    8,365
    Thanks
    304
    Thanked
    568 times in 409 posts

    Re: Home Server Chit Chat

    I have 2 N54Ls with 16GB RAM in each running VMware ESXi 5.1, plus a QNAP NAS for shared storage, and use them with DRS, vMotion etc. They work very well.
    □ΞVΞ□

  15. #31
    Dark side super agent
    Join Date
    Dec 2003
    Location
    Nirvana
    Posts
    1,895
    Thanks
    72
    Thanked
    99 times in 89 posts

    Re: Home Server Chit Chat

    I run an N40L with 8GB memory. I've got ESXi installed on a USB thumb drive, 4x2TB hard drives internally and a 250GB eSATA drive for the VMs. I run a Xpenology VM which has pass-through access to the 2TB drives, and a Windows 7 VM which I need to run a particular piece of software. All this runs smooth as a superfluid glidey thing. ESXi was a clamber up a learning curve, but now I've got it running I'm quite taken aback at how smoothly everything runs on such a relatively low-powered machine.

    All this is a long winded way of saying grab a Microserver, install ESXi and have fun playing with it!
    An Atlantean Triumvirate, Ghosts of the Past, The Centre Cannot Hold
    The Pillars of Britain, Foundations of the Reich, Cracks in the Pillars.

    My books are available here for Amazon Kindle. Feedback always welcome!

  16. #32
    Splash
    Guest

    Re: Home Server Chit Chat

    An N36L (w/ 16GB) running ESXi hosts my vCenter and vCOPS VMs; an N40L (w/ 16GB) running Windows Server 2012 is my file/media server; an HP ML115 G5 (w/ 8GB) running FreeNAS is my iSCSI storage server; another ML115 G5 is currently powered off (it was too cheap to pass up, so it's there as a cold spare); and a Thecus N5200PRO acts as my local backup repo. Plus 3 Intel NUCs (w/ 16GB) running ESXi and... that's about it.


    Don't mention electricity to the missus.

