
Thursday, February 17, 2011

Dynamically Convert A Raid5 Array to Raid6

Transferred from my old blog:
If you have the newest mdadm tool, version 3.1.1 and above, it is now capable of changing the raid level of an array. If you cannot find a copy of the latest version for your distro you can compile from source via mdadm’s git repo, git://neil.brown.name/mdadm
The below assumes that you have 1 spare drive ready to add to your array (/dev/sdb), that /dev/md0 is the raid5 array you would like to move to raid6, and that /dev/md0 starts with 4 raid devices.
Use the below commands:
mdadm --add /dev/md0 /dev/sdb
mdadm --grow /dev/md0 --level=6 --raid-devices=5
Once this completes, you should have a fully functioning raid6 array. Enjoy your dual parity.
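The reshape runs in the background and can take many hours, so it’s worth keeping an eye on /proc/mdstat while it works. A minimal sketch of pulling the completion percentage out of that output — the sample mdstat line below is hypothetical, standing in for what a live system would show:

```shell
# On a live system, watch the reshape with:
#   watch -n 5 cat /proc/mdstat
# The reshape line looks roughly like this sample; the awk below
# finds the word "reshape" and prints the percentage two fields later.
sample='[=>...................]  reshape =  7.5% (36864000/488383488) finish=312.5min speed=24064K/sec'
echo "$sample" | awk '{ for (i = 1; i <= NF; i++) if ($i == "reshape") { print $(i+2); exit } }'
```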
Further, you can also change the chunk size dynamically while you’re at it. The default chunk size in mdadm (which I believe they plan to up in future versions) is a paltry 64k; you’d be much better off with something in the 256-512k range. To change the chunk size of an array, use the following:
mdadm --grow /dev/md0 --chunk=512
I’ve seen several references now using --chunk-size, so it’s possible that in future versions this may be the correct flag instead of --chunk; just something to be aware of. Also, upping your chunk size to 512 may not be possible depending on the total size of your array. It’s possible that mdadm will spit out an error stating that the total array size is not divisible by 512, in which case you’ll have to settle for something smaller (e.g. try 256 or 128).
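Once the chunk-size reshape completes, you can confirm the new value from the array details. A sketch of parsing it out — the sample line below is hypothetical, modeled on the usual “Chunk Size : …” line that `mdadm --detail` prints:

```shell
# On a live system:
#   mdadm --detail /dev/md0 | grep 'Chunk Size'
# Parsing the value out of a sample --detail line by splitting on " : ":
sample='     Chunk Size : 512K'
echo "$sample" | awk -F' : ' '{ print $2 }'
```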

Sunday, February 13, 2011

Restore drives that have been erroneously marked as failed in a Raid 5/6 array

Transferred from my old blog.
Scenario:

You have a raid5/6 array (/dev/md0) in which one or more drives have been marked as failed. For instance, I had a motherboard problem in my server recently that would cause my esata controller to spontaneously reset ports and knock 3-4 drives off my array at a time, putting the array in a failed state. All is lost, yes? No! Since in my scenario the array immediately fails once more than two drives disappear (this is a raid6 array), no data has changed on the actual file system, and you can use the following command to force-reassemble the array. If possible, mdadm will up the event count on the “failed” drive(s) and clear the faulty flag for them.

NOTE: Be careful with this. If a given drive was knocked out of your array before a modification to the array (i.e. a write to the array or an array reshape), forcing reassembly can cause massive, non-recoverable data corruption. Only do this if you are SURE that your array contents have not been changed/written to between the time these one or more drives were removed from your array and now.
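Before forcing the reassembly, it’s worth comparing event counts across the member drives: `mdadm --examine /dev/sdX | grep Events` shows each drive’s count, and drives that dropped out will lag behind the rest. A small spread suggests nothing was written after the drives vanished. A sketch of the comparison, with hypothetical counts standing in for real `--examine` output:

```shell
# On a live system, collect counts with something like:
#   for d in /dev/sd[b-d]; do mdadm --examine "$d" | grep Events; done
# Hypothetical counts stand in for that output here. If the spread is
# small, a forced assembly has a good chance of coming up clean.
counts='4211 4211 4198'
min=$(echo $counts | tr ' ' '\n' | sort -n | head -1)
max=$(echo $counts | tr ' ' '\n' | sort -n | tail -1)
echo "event count spread: $((max - min))"
```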

If you have filled out mdadm.conf with your array and corresponding drives:

mdadm -Af /dev/md0

If you do not have mdadm.conf filled out and rely on mdadm auto assembling your array upon start up, use the following:

mdadm -Af /dev/md0 <devices>

Where <devices> is replaced by the drives that make up your array, for example:

mdadm -Af /dev/md0 /dev/sd[b-d] /dev/sd[h-o]
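After a forced assembly, it’s worth confirming the array actually came back clean before you mount anything. A sketch of checking the State line — the sample line is hypothetical, modeled on the “State : …” line that `mdadm --detail` prints:

```shell
# On a live system:
#   mdadm --detail /dev/md0 | grep State
# (cat /proc/mdstat is also worth a look for any resync in progress.)
sample='          State : clean'
state=$(echo "$sample" | awk -F' : ' '{ print $2 }')
if [ "$state" = "clean" ]; then
  echo "array looks healthy"
else
  echo "array state: $state -- check /proc/mdstat before mounting"
fi
```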
