
Thread: Backblaze shares 2017 HDD reliability stats

  1. #17
    Member
    Join Date
    May 2009
    Location
    Where you are not
    Posts
    190
    Thanks
    13
    Thanked
    13 times in 9 posts
    • Iota's system
      • Motherboard:
      • GA-P67A-UD5-B3
      • CPU:
      • Intel Core i7 2600K
      • Memory:
      • 2 x BL2KIT25664FN2139
      • Storage:
      • 4 x CTFDDAC064MAG-1G1 (Raid 0)
      • Graphics card(s):
      • ASUS Radeon R9 290 DC-2
      • PSU:
      • Corsair Professional Series Gold AX750
      • Case:
      • Lian Li PC-X500B
      • Operating System:
      • Windows 10 Pro 64-bit
      • Monitor(s):
      • 2x Samsung 22" widescreen P2270 2ms DVI HD LCD TFT Ecofit
      • Internet:
      • 40Mbps SKY Fibre

    Re: Backblaze shares 2017 HDD reliability stats

    Quote Originally Posted by peterb View Post
    RAID is about resilience and maintaining uptime allowing a failing drive to be swapped out with minimal service interruption (the exception is RAID0 which just doubles the risk of data loss as failure of one drive can affect the data on both). They are not backup substitutes.

    SSDs don’t alter that - in fact they increase the need for backup because when they fail, they are (at present) more likely to fail catastrophically rather than giving early warning signs.
    Oh don't get me wrong, I'm well aware of the reasoning for RAID for resilience and maintaining uptime, especially in a database use scenario amongst other uses. Equally it definitely isn't a substitute for regular backups, as you pointed out. As for RAID 0 increasing the risk of data loss, that's something I'd challenge (regardless of the mathematics): a single drive dying is just as likely whether it's on its own or working alongside another.

    Someone linked this recently - https://techreport.com/review/26523/...-to-a-petabyte - although SSDs may fail catastrophically, of those tested, they all surpassed their endurance specifications with ease, and SMART reports the status of the drive (which can easily be read in programs like HWInfo). As long as you're not expecting to keep going past the endurance specs, I wouldn't imagine they're any less reliable than spinning plates of rust (in my albeit anecdotal experience so far, they're more reliable).

  2. Received thanks from:

    Millennium (07-02-2018)

  3. #18
    Admin Team peterb's Avatar
    Join Date
    Aug 2005
    Location
    Southampton
    Posts
    17,385
    Thanks
    2,260
    Thanked
    2,820 times in 2,253 posts
    • peterb's system
      • Motherboard:
      • Nascom 2
      • CPU:
      • Z80B
      • Memory:
      • 48K 8 bit memory on separate card
      • Storage:
      • Audio cassette tape - home built 5.25" floppy drive
      • Graphics card(s):
      • text output (composite video)
      • PSU:
      • Home built
      • Case:
      • Home built
      • Operating System:
      • Nas-sys
      • Monitor(s):
      • 12" monochrome composite video input
      • Internet:
      • No networking capability on this machine

    Re: Backblaze shares 2017 HDD reliability stats

    WRT RAID0, my reasoning is that if each drive has an MTBF of (say) 100 hours, then the MTBF of two is 50 hours, and as failure of either will probably result in the loss of data, the risk is doubled.

    However I will add that stats is not my strong subject so I’m open to having my logic challenged!
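    The reasoning above can be checked numerically. Under the usual textbook assumption of independent drives with exponentially distributed lifetimes, the time to the first failure of n drives is itself exponential with n times the failure rate, so the array MTBF really is the per-drive MTBF divided by n. A minimal Monte Carlo sketch (the 100-hour figure is just the example from the post, not a real drive spec):

    ```python
    # Sketch: MTBF of a RAID 0 array under the assumption of independent,
    # exponentially distributed drive lifetimes. The array dies when the
    # FIRST drive dies, so we average min() over many simulated trials.
    import random

    def simulate_array_mtbf(drive_mtbf, n_drives, trials=100_000):
        """Average time until the first of n_drives fails (Monte Carlo)."""
        total = 0.0
        for _ in range(trials):
            total += min(random.expovariate(1 / drive_mtbf)
                         for _ in range(n_drives))
        return total / trials

    random.seed(0)
    single = 100.0  # per-drive MTBF in hours, as in the example above
    print(simulate_array_mtbf(single, 1))  # close to 100 hours
    print(simulate_array_mtbf(single, 2))  # close to 50 hours: MTBF halved
    ```

    Real drives don't fail at a constant rate (early-life and wear-out failures follow a bathtub curve), so this is the idealised model behind the "MTBF halves" claim rather than a prediction for any specific drive.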
    (\__/)
    (='.'=)
    (")_(")

    Been helped or just 'Like' a post? Use the Thanks button!
    My broadband speed - 750 Meganibbles/minute

  4. #19
    root Member DanceswithUnix's Avatar
    Join Date
    Jan 2006
    Location
    In the middle of a core dump
    Posts
    8,149
    Thanks
    335
    Thanked
    776 times in 671 posts
    • DanceswithUnix's system
      • Motherboard:
      • M5A-97 EVO R2.0
      • CPU:
      • FX-8350
      • Memory:
      • 16GB ECC 1333
      • Storage:
      • 660GB Linux, 500GB Games (Win 10)
      • Graphics card(s):
      • Sapphire Nitro R9 380 4GB
      • PSU:
      • 650W Corsair TX
      • Case:
      • Antec 300
      • Operating System:
      • Fedora 24 + Win 10 Pro 64 (yuk)
      • Monitor(s):
      • Benq XL2730Z 1440p + Samsung 2343BW 2048x1152
      • Internet:
      • 80Mb/20Mb VDSL

    Re: Backblaze shares 2017 HDD reliability stats

    Quote Originally Posted by peterb View Post
    WRT RAID0, my reasoning is that if each drive has an MTBF of (say) 100 hours, then the MTBF of two is 50 hours, and as failure of either will probably result in the loss of data, the risk is doubled.

    However I will add that stats is not my strong subject so I’m open to having my logic challenged!
    I don't remember the maths being that easy, but the chance of either of two drives failing won't be the same as double the chance of a single drive failing. Consider a RAID 0 of 10 drives (you just wouldn't), yet no number of drives gives you a certainty of failure. Not even with Seagates.
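    The point that more drives raise the risk but never make failure certain follows directly from independence: if each drive fails within some window with probability p, the chance at least one fails is 1 - (1 - p)^n, which climbs towards 1 but never reaches it. A quick sketch (the 5% annual failure rate is an assumed illustrative figure, not taken from Backblaze's tables):

    ```python
    # Probability that a RAID 0 array loses data within a period, assuming
    # each drive independently fails in that period with probability p.
    # The array survives only if EVERY drive survives: (1 - p) ** n.
    def raid0_failure_prob(p, n_drives):
        return 1 - (1 - p) ** n_drives

    p = 0.05  # assumed 5% annual failure rate per drive
    for n in (1, 2, 10, 100):
        print(n, round(raid0_failure_prob(p, n), 4))
    # Two drives give ~0.0975, slightly LESS than double the single-drive
    # risk, and even 100 drives stay short of certainty.
    ```

    So "the risk is doubled" is a good approximation for two drives when p is small, but the exact figure is 2p - p², which is why the simple doubling argument breaks down as drive counts grow.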

  5. #20
    Member
    Join Date
    May 2009
    Location
    Where you are not
    Posts
    190
    Thanks
    13
    Thanked
    13 times in 9 posts
    • Iota's system
      • Motherboard:
      • GA-P67A-UD5-B3
      • CPU:
      • Intel Core i7 2600K
      • Memory:
      • 2 x BL2KIT25664FN2139
      • Storage:
      • 4 x CTFDDAC064MAG-1G1 (Raid 0)
      • Graphics card(s):
      • ASUS Radeon R9 290 DC-2
      • PSU:
      • Corsair Professional Series Gold AX750
      • Case:
      • Lian Li PC-X500B
      • Operating System:
      • Windows 10 Pro 64-bit
      • Monitor(s):
      • 2x Samsung 22" widescreen P2270 2ms DVI HD LCD TFT Ecofit
      • Internet:
      • 40Mbps SKY Fibre

    Re: Backblaze shares 2017 HDD reliability stats

    Quote Originally Posted by peterb View Post
    WRT RAID0, my reasoning is that if each drive has an MTBF of (say) 100 hours, then the MTBF of two is 50 hours, and as failure of either will probably result in the loss of data, the risk is doubled.

    However I will add that stats is not my strong subject so I’m open to having my logic challenged!
    I've seen that argument (and the mathematics behind it), however logic dictates that if you have a single drive with an MTBF of 1.2 million hours, the MTBF of two single drives is still 1.2 million hours per drive, and the per-drive MTBF isn't actually reduced when they're used together. However, the chance of a drive failure has essentially increased by increasing the number of drives, regardless of the MTBF.

    Personally, I'll just go with the MTBF that has been tested by the drive manufacturers; they'll have tested the average length of time before a drive fails and provided a warranty based on that average. Bearing in mind the variables involved in either an HDD or SSD failing (such as silicon variances, for example), I doubt we will ever have absolutes provided.

    I mean I could go buy lots of lottery tickets, while the chances of me winning have increased, it doesn't mean I'm going to. It also doesn't mean I'm not going to.

  6. #21
    Senior Member
    Join Date
    Jun 2005
    Posts
    856
    Thanks
    456
    Thanked
    69 times in 60 posts
    • Millennium's system
      • Motherboard:
      • Asus Z170 Pro Gamer ATX
      • CPU:
      • Intel i5 6600K @ 4.5GHz 4 core
      • Memory:
      • Corsair VPX 3000 DDR4 (16, 4*4)
      • Storage:
      • 500gb 850 Evo sata3 SSD, 2*2TB Green 5900 Raid 0
      • Graphics card(s):
      • MSI 390 8gb
      • PSU:
      • toughpower 1kw
      • Case:
      • Zalman Z3 Plus
      • Operating System:
      • Windows 10 64bit
      • Monitor(s):
      • VIEWSONIC VG2401MH 144hz (Solid)
      • Internet:
      • Origin ADSL Broadband, not really recommended.

    Re: Backblaze shares 2017 HDD reliability stats

    I'm splitting my RAID tonight. I don't need the extra speed and just ran it because of E-PEEN from the old Athlon 64 days when SSDs were not around. If I need speedy storage I'll just get another SSD in future. Not worth the risk.

    Backing up
    : n(baby):n(lover):n(sky)|>P(Name)>>not quite

    how do you spend your time online? (Hexus link)

  7. #22
    Technojunkie
    Join Date
    May 2004
    Location
    Up North
    Posts
    2,564
    Thanks
    233
    Thanked
    212 times in 137 posts

    Re: Backblaze shares 2017 HDD reliability stats

    Quote Originally Posted by peterb View Post
    SSDs don’t alter that - in fact they increase the need for backup because when they fail, they are (at present) more likely to fail catastrophically rather than giving early warning signs.
    Will data recovery companies do anything at all for failed SSDs?

    I thought flash memory failures would make the disk go read only (I've had SD cards do that) - but every SSD failure I've seen has been a total death / non-detection (controller failure?)
    Chrome & Firefox addons for BBC News
    Follow me @twitter

  8. #23
    Senior Member
    Join Date
    May 2008
    Location
    London town
    Posts
    237
    Thanks
    6
    Thanked
    12 times in 9 posts

    Re: Backblaze shares 2017 HDD reliability stats

    Quote Originally Posted by mikerr View Post
    Will data recovery companies do anything at all for failed SSDs ?
    Think some do, but it's hard to do data recovery on SSDs. The data is scattered across multiple flash chips, the 10% redundancy means random blocks scattered across the disk will sit in the redundancy area, and to add to the fun, 256-bit on-disk encryption is pretty much standard, with the encryption key buried somewhere in silicon in a place where it is meant to be unrecoverable.

    So yeah, backup is important. The good thing is that stuff like Acronis runs fast on SSDs, so twice-a-day backups aren't an issue.
