Thursday, February 17, 2011

Dynamically Convert A Raid5 Array to Raid6

Transferred from my old blog:
If you have a recent mdadm, version 3.1.1 or above, it is capable of changing the RAID level of an array in place. If your distro does not package a recent enough version, you can compile from source via mdadm's git repo: git://
The steps below assume that you have one spare drive ready to add to your array (/dev/sdb), that /dev/md0 is the raid5 array you would like to convert to raid6, and that /dev/md0 starts with 4 raid devices.
Use the below commands:
mdadm --add /dev/md0 /dev/sdb
mdadm --grow /dev/md0 --level=6 --raid-devices=5
Once this completes, you should have a fully functioning raid6 array. Enjoy your dual parity.
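As a sanity check on what the reshape buys you: going from a 4-drive raid5 to a 5-drive raid6 keeps usable capacity the same, since the new drive is consumed by the second parity block. A minimal sketch of the arithmetic, using a hypothetical per-device size:

```shell
# Hypothetical per-device size in KiB; substitute your own from mdadm --detail
dev_size_k=1948792256

# raid5 stores n-1 drives' worth of data; raid6 stores n-2
raid5_capacity_k=$(( (4 - 1) * dev_size_k ))
raid6_capacity_k=$(( (5 - 2) * dev_size_k ))

echo "4-drive raid5: ${raid5_capacity_k} KiB"
echo "5-drive raid6: ${raid6_capacity_k} KiB"
# Both come out the same: no capacity change, but you gain dual parity
```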
Further, you can change the chunk size dynamically while you're at it. The default chunk size in mdadm (which I believe they plan to increase in future versions) is a paltry 64k; you'd be much better off with something in the 256-512k range. To change the chunk size of an array, use the following:
mdadm --grow /dev/md0 --chunk=512
I've seen several references now using --chunk-size, so it's possible that in future versions this may be the correct flag instead of --chunk; just something to be aware of. Also, upping your chunk size to 512 may not be possible depending on the total size of your array. It's possible that mdadm will spit out an error stating that the size is not divisible by 512, in which case you'll have to settle for something smaller (e.g. try 256 or 128).
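Rather than letting mdadm error out, you can check in advance which chunk sizes divide your device size evenly. A small sketch (the size here is hypothetical; pull yours from the Used Dev Size line of mdadm --detail):

```shell
# Hypothetical component-device size in KiB (mdadm --detail's "Used Dev Size")
dev_size_k=1948792256

for chunk_k in 512 256 128 64; do
  if [ $(( dev_size_k % chunk_k )) -eq 0 ]; then
    echo "chunk ${chunk_k}K: OK"
  else
    echo "chunk ${chunk_k}K: not evenly divisible"
  fi
done
```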


  1. Here is the output from mdadm --detail on my NAS volume. All bays are in use, so I can't just add another drive. Can I still convert it from raid5 to raid6? If so, please look at the command I've included after the paste and let me know if it looks correct:

    Version : 1.2
    Creation Time : Fri Sep 16 07:39:38 2011
    Raid Level : raid5
    Array Size : 27283091584 (26019.18 GiB 27937.89 GB)
    Used Dev Size : 1948792256 (1858.51 GiB 1995.56 GB)
    Raid Devices : 15
    Total Devices : 15
    Persistence : Superblock is persistent

    Update Time : Sat Nov 5 04:15:35 2011
    State : active
    Active Devices : 15
    Working Devices : 15
    Failed Devices : 0
    Spare Devices : 0

    Layout : left-symmetric
    Chunk Size : 64K

    Name : MediaCenter:2 (local to host MediaCenter)
    UUID : 96c94c5e:e3de5e5a:febb1a78:22639834
    Events : 1029375

    Number Major Minor RaidDevice State
    0 8 3 0 active sync /dev/hda3
    1 8 19 1 active sync /dev/hdb3
    2 8 35 2 active sync /dev/sdc3
    3 8 51 3 active sync /dev/hdd3
    4 8 67 4 active sync /dev/sde3
    5 131 99 5 active sync /dev/sdga3
    6 131 115 6 active sync /dev/sdgb3
    7 131 131 7 active sync /dev/sdgc3
    8 131 147 8 active sync /dev/sdgd3
    9 131 163 9 active sync /dev/sdge3
    10 133 3 10 active sync /dev/sdha3
    11 133 19 11 active sync /dev/sdhb3
    12 133 35 12 active sync /dev/sdhc3
    13 133 51 13 active sync /dev/sdhd3
    14 133 67 14 active sync /dev/sdhe3

    The command I THINK is right is this:

    mdadm --grow --level=6 --backup-file=/volume2/backupfile --raid-devices=15 /dev/md2

    How large does the backup file need to be? /volume2 is 2TB... will that be enough given the size of my raid? Or, if that command is completely wrong, how would I convert my raid to raid6?

  2. I have not had to work with level changing without adding drives, but I'm fairly certain that the backup file does not need to be the size of the array; your safest bet would be to make sure that the place you're putting the backup file has as much free space as a single device in your array. I've never done it, but I think you can raise the raid level without an extra drive if you have enough free space: essentially you're shrinking the raid5 from 15 drives to 14 drives and then stepping the level up to 6, so if your data would still fit on a 14-drive raid5 array I think it's okay. That said, I stopped using raid a while ago and switched to Greyhole, so it's been a while. Buyer beware, YMMV, and all the rest. You can take a look at Neil Brown's blog for more information:
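To put a number on the "would your data still fit" question for the array above: a 15-drive raid6 holds one drive's worth less data than a 15-drive raid5. A back-of-the-envelope sketch using the Used Dev Size from the mdadm --detail output:

```shell
# Used Dev Size from the mdadm --detail output above, in KiB
dev_size_k=1948792256

# 15-drive raid5 holds 14 drives' worth of data; 15-drive raid6 holds 13
raid5_k=$(( 14 * dev_size_k ))
raid6_k=$(( 13 * dev_size_k ))

echo "current raid5 capacity: ${raid5_k} KiB"   # matches the 27283091584 Array Size above
echo "raid6 capacity after conversion: ${raid6_k} KiB"
```

If the data currently on the array exceeds the raid6 figure, you would need to free space (or shrink the filesystem) before attempting the conversion.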