
Thread: Backblaze shares 2017 HDD reliability stats

  1. #17
    Senior Member
    Join Date
    May 2009
    Location
    Where you are not
    Posts
    1,331
    Thanks
    611
    Thanked
    103 times in 90 posts
    • Iota's system
      • Motherboard:
      • Asus Maximus Hero XI
      • CPU:
      • Intel Core i9 9900KF
      • Memory:
      • CMD32GX4M2C3200C16
      • Storage:
      • 1 x 1TB / 3 x 2TB Samsung 970 Evo Plus NVMe
      • Graphics card(s):
      • Nvidia RTX 3090 Founders Edition
      • PSU:
      • Corsair HX1200i
      • Case:
      • Corsair Obsidian 500D
      • Operating System:
      • Windows 10 Pro 64-bit
      • Monitor(s):
      • Samsung Odyssey G9
      • Internet:
      • 500Mbps BT FTTH

    Re: Backblaze shares 2017 HDD reliability stats

    Quote Originally Posted by peterb
    RAID is about resilience and maintaining uptime, allowing a failing drive to be swapped out with minimal service interruption (the exception is RAID 0, which just doubles the risk of data loss, as failure of one drive can affect the data on both). They are not backup substitutes.

    SSDs don't alter that - in fact, they increase the need for backup because, when they fail, they are (at present) more likely to fail catastrophically than to give early warning signs.
    Oh don't get me wrong, I'm well aware of the reasoning for RAID for resilience and maintaining uptime, especially in a database scenario, amongst other uses. Equally, it definitely isn't a substitute for regular backups, as you pointed out. As for RAID 0 increasing the risk of data loss, that's something I'd challenge (regardless of the mathematics): a single drive dying is just as likely as a single drive working alongside another dying.

    Someone linked this recently - https://techreport.com/review/26523/...-to-a-petabyte. Although SSDs may fail catastrophically, all of those tested surpassed their endurance specifications with ease, and SMART reports the status of the drive (which can easily be read in programs like HWInfo). As long as you're not expecting to keep going past the endurance specs, I wouldn't imagine they're any less reliable than spinning plates of rust (in my albeit anecdotal experience so far, they're more reliable).
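
    As a back-of-envelope illustration, here's a rough Python sketch (the endurance and workload figures below are made up for illustration, not taken from the TechReport test):

    Code:
    # Hypothetical numbers: how long a rated endurance (TBW) lasts in practice.
    rated_endurance_tb = 150   # manufacturer's rated terabytes written (TBW)
    daily_writes_gb = 30       # a fairly heavy desktop write load

    years = (rated_endurance_tb * 1000) / daily_writes_gb / 365
    print(f"Rated endurance lasts ~{years:.1f} years at {daily_writes_gb} GB/day")
    # ~13.7 years, and the drives in the test kept going well past their ratings.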

  2. Received thanks from:

    Millennium (07-02-2018)

  3. #18
    The late but legendary peterb - Onward and Upward
    Join Date
    Aug 2005
    Location
    Looking down & checking on swearing
    Posts
    19,378
    Thanks
    2,892
    Thanked
    3,403 times in 2,693 posts

    Re: Backblaze shares 2017 HDD reliability stats

    WRT RAID 0, my reasoning is that if each drive has an MTBF of (say) 100 hours, then the MTBF of two is 50 hours, and as failure of either will probably result in the loss of data, the risk is doubled.

    However, I will add that stats is not my strong subject, so I'm open to having my logic challenged!
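
    For what it's worth, under the usual simplifying assumption of independent, exponentially distributed failures, that reasoning does hold up: the combined MTBF of two drives is half the single-drive figure, and while the probabilities stay small, the risk over a fixed window roughly doubles. A quick Python sketch with the toy 100-hour figure:

    Code:
    import math

    mtbf = 100.0   # hours per drive (the toy figure above)
    t = 10.0       # hours of operation we care about

    # Assuming independent, exponentially distributed failures:
    p_one = 1 - math.exp(-t / mtbf)            # one drive fails within t
    p_raid0 = 1 - math.exp(-t / (mtbf / 2))    # either of two drives fails

    print(f"one drive: {p_one:.3f}, RAID 0 pair: {p_raid0:.3f}")
    # 0.095 vs 0.181 -- close to double while the probabilities stay small.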
    (\__/)
    (='.'=)
    (")_(")

    Been helped or just 'Like' a post? Use the Thanks button!
    My broadband speed - 750 Meganibbles/minute

  4. #19
    root Member
    Join Date
    Jan 2006
    Location
    In the middle of a core dump
    Posts
    13,010
    Thanks
    781
    Thanked
    1,568 times in 1,325 posts
    • DanceswithUnix's system
      • Motherboard:
      • Asus X470-PRO
      • CPU:
      • 5900X
      • Memory:
      • 32GB 3200MHz ECC
      • Storage:
      • 2TB Linux, 2TB Games (Win 10)
      • Graphics card(s):
      • Asus Strix RX Vega 56
      • PSU:
      • 650W Corsair TX
      • Case:
      • Antec 300
      • Operating System:
      • Fedora 39 + Win 10 Pro 64 (yuk)
      • Monitor(s):
      • Benq XL2730Z 1440p + Iiyama 27" 1440p
      • Internet:
      • Zen 900Mb/900Mb (CityFibre FttP)

    Re: Backblaze shares 2017 HDD reliability stats

    Quote Originally Posted by peterb
    WRT RAID 0, my reasoning is that if each drive has an MTBF of (say) 100 hours, then the MTBF of two is 50 hours, and as failure of either will probably result in the loss of data, the risk is doubled.

    However, I will add that stats is not my strong subject, so I'm open to having my logic challenged!
    I don't remember the maths being that easy, but the chance of either of two drives failing won't be the same as the chance of a single drive failing. Consider a RAID 0 of 10 drives: you just wouldn't, yet no number of drives gives you a certainty of failure. Not even with Seagates.
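
    Something like this, as a rough sketch (assuming independent drives and an illustrative 5% chance of any one drive failing in a given year; the number is made up):

    Code:
    # RAID 0 loses its data if any one member drive fails.
    # p is an illustrative per-drive annual failure probability, not a real stat.
    p = 0.05

    for n in (1, 2, 4, 10, 100):
        p_array = 1 - (1 - p) ** n   # at least one of n drives fails
        print(f"{n:>3} drives: {p_array:.1%}")
    # 5.0%, 9.8%, 18.5%, 40.1%, 99.4% -- climbs fast, never quite reaches 100%.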

  5. #20
    Senior Member
    Join Date
    May 2009
    Location
    Where you are not
    Posts
    1,331
    Thanks
    611
    Thanked
    103 times in 90 posts
    • Iota's system
      • Motherboard:
      • Asus Maximus Hero XI
      • CPU:
      • Intel Core i9 9900KF
      • Memory:
      • CMD32GX4M2C3200C16
      • Storage:
      • 1 x 1TB / 3 x 2TB Samsung 970 Evo Plus NVMe
      • Graphics card(s):
      • Nvidia RTX 3090 Founders Edition
      • PSU:
      • Corsair HX1200i
      • Case:
      • Corsair Obsidian 500D
      • Operating System:
      • Windows 10 Pro 64-bit
      • Monitor(s):
      • Samsung Odyssey G9
      • Internet:
      • 500Mbps BT FTTH

    Re: Backblaze shares 2017 HDD reliability stats

    Quote Originally Posted by peterb
    WRT RAID 0, my reasoning is that if each drive has an MTBF of (say) 100 hours, then the MTBF of two is 50 hours, and as failure of either will probably result in the loss of data, the risk is doubled.

    However, I will add that stats is not my strong subject, so I'm open to having my logic challenged!
    I've seen that argument (and the mathematics behind it); however, logic dictates that if you have a single drive with an MTBF of 1.2 million hours, the MTBF of two such drives is still 1.2 million hours per drive, and the MTBF isn't actually reduced when they're used together. However, the chance of a drive failure has essentially increased by increasing the number of drives, regardless of the MTBF.
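
    To put rough numbers on that, a sketch assuming the exponential-failure model that manufacturer MTBF figures are usually based on:

    Code:
    import math

    mtbf_hours = 1_200_000   # per-drive MTBF as quoted by manufacturers
    hours_per_year = 8766

    # Annual failure rate implied by the per-drive MTBF:
    afr = 1 - math.exp(-hours_per_year / mtbf_hours)
    # Chance that at least one of two drives fails in a year:
    afr_pair = 1 - (1 - afr) ** 2

    print(f"per drive: {afr:.2%}, either of two: {afr_pair:.2%}")
    # ~0.73% vs ~1.45% -- each drive's own MTBF is untouched, but exposure doubles.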

    Personally, I'll just go with the MTBF that has been tested by the drive manufacturers; they'll have tested the average length of time before a drive fails and provided a warranty based on that average. Bearing in mind the variables involved in either an HDD or SSD failing (such as silicon variances, for example), I doubt we will ever have absolutes provided.

    I mean, I could go and buy lots of lottery tickets; while the chances of me winning have increased, it doesn't mean I'm going to. It also doesn't mean I'm not going to.

  6. #21
    don't stock motherhoods
    Join Date
    Jun 2005
    Posts
    1,298
    Thanks
    807
    Thanked
    125 times in 108 posts
    • Millennium's system
      • Motherboard:
      • MSI X470 Gaming Plus
      • CPU:
      • AMD 3600x @ 3.85 with Turbo
      • Memory:
      • 4*G-Skill Samsung B 3200 14T 1T
      • Storage:
      • WD850 and OEM961 1TB, 1.5TB SSD SATA, 4TB Storage, Ext.
      • Graphics card(s):
      • 3070 FE HHR NVidia (Mining Over)
      • PSU:
      • Toughpower 1kW (thinking of an upgrade to 600w)
      • Case:
      • Fractal Design Define S
      • Operating System:
      • Windows 10 Home 64-bit
      • Monitor(s):
      • HiSense 55" TV 4k 8bit BT709 18:10
      • Internet:
      • Vodafone 12 / month, high contentions weekends 2, phone backup.

    Re: Backblaze shares 2017 HDD reliability stats

    I'm splitting my RAID tonight. I don't need the extra speed and just ran it because of E-PEEN from the old Athlon 64 days, when SSDs were not around. If I need speedy storage, I'll just get another SSD in future. Not worth the risk.

    Backing up
    hexus trust : n(baby):n(lover):n(sky)|>P(Name)>>nopes

    Be careful on the Internet! I ran into and tackled a drive-by mining attack today. It's not designed to do anything other than provide fake texts (say!)

  7. #22
    Technojunkie
    Join Date
    May 2004
    Location
    Up North
    Posts
    2,580
    Thanks
    239
    Thanked
    213 times in 138 posts

    Re: Backblaze shares 2017 HDD reliability stats

    Quote Originally Posted by peterb
    SSDs don't alter that - in fact, they increase the need for backup because, when they fail, they are (at present) more likely to fail catastrophically than to give early warning signs.
    Will data recovery companies do anything at all for failed SSDs?

    I thought flash memory failures would make the disk go read-only (I've had SD cards do that) - but every SSD failure I've seen has been a total death / non-detection (controller failure?).
    Chrome & Firefox addons for BBC News
    Follow me @twitter

  8. #23
    Senior Member
    Join Date
    May 2008
    Location
    London town
    Posts
    427
    Thanks
    8
    Thanked
    21 times in 16 posts

    Re: Backblaze shares 2017 HDD reliability stats

    Quote Originally Posted by mikerr
    Will data recovery companies do anything at all for failed SSDs?
    Think some do, but it's hard to do data recovery on SSDs. The data is scattered across multiple flash chips - the ~10% redundancy means random blocks scattered across the chips will sit in the redundant area, and to add to the fun, 256-bit on-disk encryption is pretty much standard, with the encryption key buried somewhere in silicon in a place where it is meant to be unrecoverable.

    So yeah, backup is important. The good thing is that stuff like Acronis runs fast on SSDs, so twice-daily backups aren't an issue.

