RAID running out of time?

Discussion in 'NZ Computing' started by Lawrence D'Oliveiro, Aug 5, 2007.

  1. Interesting analysis <http://blogs.zdnet.com/storage/?p=162> of why, as your
    disks get larger, the chance of a RAID 5 array failing will reach certainty
    by about 2009, and even RAID 6 won't protect you for much longer.

    The problem is that, once a drive fails and the array tries to rebuild, the
    odds of hitting an unrecoverable read error (given disk sizes and current
    industry-standard accepted error rates) during the rebuild will be close to
    100%. And bang goes your whole array.
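
    A quick back-of-the-envelope sketch of the odds (assuming the commonly
    quoted consumer-drive spec of one unrecoverable read error per 10^14 bits,
    and independent errors; the drive counts and sizes below are just examples):

        # Rough chance of completing a RAID 5 rebuild without hitting a
        # single unrecoverable read error (URE) on the surviving drives.
        URE_RATE = 1e-14   # assumed probability of a URE per bit read

        def clean_rebuild_probability(surviving_drives, drive_size_tb):
            # Every bit on every surviving drive has to be read back cleanly.
            bits_to_read = surviving_drives * drive_size_tb * 1e12 * 8
            return (1.0 - URE_RATE) ** bits_to_read

        print(clean_rebuild_probability(6, 1.0))   # ~0.62 with 1 TB drives
        print(clean_rebuild_probability(6, 2.0))   # ~0.38 with 2 TB drives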
    Lawrence D'Oliveiro, Aug 5, 2007
    #1

  2. RL

    RL Guest

    Lawrence D'Oliveiro wrote:
    > The problem is that, once a drive fails and the array tries to rebuild, the
    > odds of hitting an unrecoverable read error (given disk sizes and current
    > industry-standard accepted error rates) during the rebuild will be close to
    > 100%. And bang goes your whole array.


    RAID is not a substitute for backups, but restoring several terabytes
    may not be fun.

    I am exploring using ZFS for my forthcoming RAID deployment, because the
    built-in checksums will at least give an indication of any corruption
    that occurs. We can then restore from backup as appropriate.

    Recently I purchased a 500GB Western Digital SATA drive (AAKS), and
    managed to reliably reproduce a single-bit error by formatting the disk,
    writing data to the disk, and comparing it. The error was always in the
    same place within the byte. I haven't been so thorough checking the
    replacement unit, but it is a worry that random data corruption can
    occur so easily.
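
    A rough sketch of that sort of write-then-compare check (done at file
    level here; the path, size and pattern are only illustrative, and in
    practice you would also need to drop the page cache or use O_DIRECT so
    the read-back actually hits the disk):

        import os

        PATH = "/mnt/testdrive/testdata.bin"   # illustrative mount point
        CHUNK = 1024 * 1024                    # write/verify 1 MiB at a time
        CHUNKS = 1024                          # 1 GiB of test data in total
        PATTERN = bytes(range(256)) * (CHUNK // 256)

        # Write a known pattern and force it out to the disk.
        with open(PATH, "wb") as f:
            for _ in range(CHUNKS):
                f.write(PATTERN)
            f.flush()
            os.fsync(f.fileno())

        # Read it back and report any bytes that came back changed.
        with open(PATH, "rb") as f:
            for chunk_no in range(CHUNKS):
                data = f.read(CHUNK)
                if data != PATTERN:
                    for i, (want, got) in enumerate(zip(PATTERN, data)):
                        if want != got:
                            print("mismatch at offset %d: wrote %02x, read %02x"
                                  % (chunk_no * CHUNK + i, want, got))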

    RL
    RL, Aug 5, 2007
    #2

  3. Lawrence D'Oliveiro wrote:
    > Interesting analysis <http://blogs.zdnet.com/storage/?p=162> of why, as your
    > disks get larger, the chance of a RAID 5 array failing will reach certainty
    > by about 2009, and even RAID 6 won't protect you for much longer.
    >
    > The problem is that, once a drive fails and the array tries to rebuild, the
    > odds of hitting an unrecoverable read error (given disk sizes and current
    > industry-standard accepted error rates) during the rebuild will be close to
    > 100%. And bang goes your whole array.


    Interesting mathematics. I suppose that is why we are moving to dual
    system arrays with disk packs: two independent disk arrays backing each
    other up.
    collector«NZ, Aug 5, 2007
    #3
  4. Related to the above, here's <http://blogs.zdnet.com/storage/?p=169> a post
    referencing a guy's PhD thesis on the robustness of current filesystems.
    They all seem to be quite poor at recovering from errors, but then, what
    else is new? I guess this just underlines that we have to move to something
    like ZFS before too long.
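
    To illustrate the principle ZFS builds in (checksumming everything end to
    end, so silent corruption is at least detected), here is a minimal
    user-level sketch; the manifest file name and command-line interface are
    only illustrative, and detection still leaves repair to redundancy or
    backups:

        import hashlib, json, os, sys

        MANIFEST = "checksums.json"   # illustrative manifest file name

        def hash_file(path):
            # SHA-256 of the whole file, read in 1 MiB blocks.
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for block in iter(lambda: f.read(1024 * 1024), b""):
                    h.update(block)
            return h.hexdigest()

        def walk(root):
            for dirpath, _, names in os.walk(root):
                for name in names:
                    yield os.path.join(dirpath, name)

        def record(root):
            # Hash every file under root and save the results.
            sums = {p: hash_file(p) for p in walk(root)}
            with open(MANIFEST, "w") as f:
                json.dump(sums, f, indent=2)

        def verify():
            # Re-hash the recorded files and report any that changed.
            with open(MANIFEST) as f:
                sums = json.load(f)
            for path, expected in sums.items():
                if os.path.exists(path) and hash_file(path) != expected:
                    print("checksum mismatch:", path)

        if __name__ == "__main__":
            # usage: python checksums.py record /some/archive
            #        python checksums.py verify
            if sys.argv[1] == "record":
                record(sys.argv[2])
            else:
                verify()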
    Lawrence D'Oliveiro, Aug 9, 2007
    #4
