Overview of RAID5 and RAID6
RAID5 arrays are useful for pooling storage devices together and providing fault tolerance in the event of a drive or data failure. A storage array configured for RAID5 can tolerate the loss of one disk or volume without losing any data. Thanks to parity math, a RAID5 array that has lost one volume can be rebuilt from the remaining volumes. Super awesome: when a drive fails, you will not lose your data!
There is a catch, though: hard drives now have massive storage capacities, with 4TB drives available to consumers as of May 2013. If a RAID5 array is built from large capacity disks, it takes longer to rebuild the array after a failed volume. During the rebuild the entire array is at risk and cannot survive the loss of another volume. Because the rebuild process is very intensive on the remaining volumes, the chance of a second disk failing is generally higher while it runs. Enter RAID6.
RAID6 has become the industry standard for storage arrays built from high capacity disks. An array configured as RAID6 can survive the failure of two devices without data loss. That extra fault tolerance increases the probability of a successful rebuild after a lost volume.
The increased fault tolerance of RAID6 is made possible by using two parity blocks per stripe, whereas RAID5 has only one parity block per stripe. This means that while a RAID5 array's usable space is ((number of disks - 1) * disk size), a RAID6 array's usable space is ((number of disks - 2) * disk size). In other words: with RAID6 you lose two disks' worth of capacity, while with RAID5 you only lose one disk's worth.
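As a quick worked example using the 3TB disks that appear later in this article: a 6-disk RAID5 would offer (6 - 1) * 3TB = 15TB of usable space, while a 6-disk RAID6 offers (6 - 2) * 3TB = 12TB.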
The case for moving from RAID5 to RAID6 becomes convincing once you have a high capacity RAID5 array that experiences a volume failure and you spend four or more days sweating the rebuild. That is exactly what happened to me, so I decided to move to RAID6 for the extra protection. Here is how to convert a RAID5 array to RAID6 using mdadm. Keep in mind that because a RAID6 array has less usable capacity than a RAID5 array with the same number of volumes, a new disk must be added to the array before the conversion if you wish to preserve your data.
The steps outlined below can cause data loss. Do not run them on a production system without fully understanding the process and testing in a development environment.
These instructions are not meant to be exhaustive and may not be appropriate for your environment. Always check with your hardware and software vendors for the appropriate steps to manage your infrastructure.
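If you want to rehearse the process somewhere harmless first, one option is a throwaway array built from loop devices. This is only a rough sketch, and it assumes the loop devices /dev/loop1 through /dev/loop6, the backing files in /root, and the array name /dev/md9 are all unused on your system:
root@debian:~# # create six 100MB files and attach them as loop devices
root@debian:~# for i in 1 2 3 4 5 6; do truncate -s 100M /root/raidtest$i.img; losetup /dev/loop$i /root/raidtest$i.img; done
root@debian:~# # build a small 5-device RAID5 to practice on, keeping loop6 aside for the conversion
root@debian:~# mdadm --create /dev/md9 --level=5 --raid-devices=5 /dev/loop[1-5]
root@debian:~# # ...practice the add and grow steps below against /dev/md9, then tear it down
root@debian:~# mdadm --stop /dev/md9
root@debian:~# for i in 1 2 3 4 5 6; do losetup -d /dev/loop$i; done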
Formatting:
Instructions and information are detailed in black font with no decoration.
Code and shell text are in black font, gray background, and a dashed border.
Input is green.
Literal keys are enclosed in brackets such as [enter], [shift], and [ctrl+c].
Warnings are in red font.
Steps to Convert RAID5 to RAID6
- Log in to your system and launch a local shell prompt.
- Change your shell to run as root.
user@debian~$: su -
Password:
root@debian~$:
- Review the current status of the array.
root@debian:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb[0] sdg[5] sdf[4] sde[2] sdc[1]
11721060352 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
unused devices: <none>
The array md0 has 5 disks, all of them active, it is running RAID5, and there are no unused devices.
- Review the detailed information of the array.
root@debian:~# mdadm --detail /dev/md0
/dev/md0:
Raid Level : raid5
Array Size : 11721060352 (11178.07 GiB 12002.37 GB)
Raid Devices : 5
Total Devices : 5
State : clean
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
2 8 64 2 active sync /dev/sde
4 8 80 3 active sync /dev/sdf
5 8 96 4 active sync /dev/sdg
Note that some data was removed for clarity.
Each disk has 3TB capacity, giving roughly 12TB of usable space in the five-disk RAID5 array (one disk's worth of capacity goes to parity). The array appears to be clean and fully functional.
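Before adding the new disk in the next step, it can be worth a quick sanity check that it is at least as large as the existing members and that it does not carry leftover RAID metadata from a previous life. A small sketch; /dev/sda and /dev/sdb are simply the device names used in this article:
root@debian:~# # compare the raw size of the new disk against an existing member (in bytes)
root@debian:~# lsblk -b -d -o NAME,SIZE /dev/sda /dev/sdb
root@debian:~# # check for an old md superblock on the new disk
root@debian:~# mdadm --examine /dev/sda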
- Add an additional disk to the RAID5 array.
root@debian:~# mdadm --add /dev/md0 /dev/sda
mdadm: added /dev/sda
Here mdadm is told to add a device: the target is the /dev/md0 array and the new disk is /dev/sda.
- Verify the disk is available to the array.
root@debian:~# mdadm --detail /dev/md0
/dev/md0:
Raid Level : raid5
Array Size : 11721060352 (11178.07 GiB 12002.37 GB)
Raid Devices : 5
Total Devices : 6
State : clean
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
2 8 64 2 active sync /dev/sde
4 8 80 3 active sync /dev/sdf
5 8 96 4 active sync /dev/sdg
6 8 0 - spare /dev/sda
The disk /dev/sda is now available to the array md0 and listed as a spare. The total number of devices has increased to 6, while the number of RAID devices is still 5 and the total capacity has remained the same.
- Configure the RAID5 array to be a RAID6.
root@debian:~# mdadm --grow /dev/md0 --level=6 --raid-devices=6 --backup-file=/root/raid5backup
mdadm level of /dev/md0 changed to raid6
This calls mdadm in grow mode, targets /dev/md0, sets the new RAID level to 6, sets the total number of devices to 6, and tells mdadm to write a backup of the critical sections of the reshape to /root/raid5backup.
The --grow mode is used because the shape of the array is changing: the number of member devices increases from 5 to 6 and the RAID level changes from 5 to 6.
Having 6 as the number of devices here is just a coincidence: RAID6 does not require 6 disks, the minimum is 4.
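A note on the backup file: if the system reboots or the reshape is interrupted during its critical section, the same file can be handed to mdadm when re-assembling the array. This is only a sketch using the device names from this article; check the mdadm man page for the details that apply to your version:
root@debian:~# mdadm --assemble /dev/md0 --backup-file=/root/raid5backup /dev/sd[abcefg]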
- Check the status of the array.
root@debian:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sda[6] sdb[0] sdg[5] sdf[4] sde[2] sdc[1]
11721060352 blocks super 1.2 level 6, 512k chunk, algorithm 18 [6/5] [UUUUU_]
[>....................] reshape = 0.0% (38912/2930265088) finish=11290.9min speed=4323K/sec
unused devices: <none>
root@debian:~# mdadm --detail /dev/md0
/dev/md0:
Raid Level : raid6
Array Size : 11721060352 (11178.07 GiB 12002.37 GB)
Used Dev Size : 2930265088 (2794.52 GiB 3000.59 GB)
Raid Devices : 6
Total Devices : 6
State : clean, degraded, recovering
Active Devices : 5
Working Devices : 6
Spare Devices : 1
Reshape Status : 0% complete
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
2 8 64 2 active sync /dev/sde
4 8 80 3 active sync /dev/sdf
5 8 96 4 active sync /dev/sdg
6 8 0 5 spare rebuilding /dev/sda
The array is reshaping into a RAID6 array. Notice that the number of RAID devices is now 6 and the RAID level is 6. The output also gives an estimate of the completion time (nearly 8 days in this case).
- The next step is simply to wait until the reshape is complete; a sketch for monitoring (and possibly speeding up) the process follows below.
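With disks this size the reshape runs for days. Here are a couple of optional ways to keep an eye on it and, if the system can spare the I/O, let it run faster; the 50000 KiB/s figure below is only an example value:
root@debian:~# # refresh the reshape progress every 60 seconds
root@debian:~# watch -n 60 cat /proc/mdstat
root@debian:~# # show the kernel's per-device rebuild speed limits (KiB/s)
root@debian:~# sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
root@debian:~# # example: raise the guaranteed minimum speed for the reshape
root@debian:~# sysctl -w dev.raid.speed_limit_min=50000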
- Verify the status of the array.
root@textbox:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sda[6] sdb[0] sdg[5] sdf[4] sde[2] sdc[1]
11721060352 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
unused devices: <none>
root@textbox:~# mdadm --detail /dev/md0
/dev/md0:
Raid Level : raid6
Array Size : 11721060352 (11178.07 GiB 12002.37 GB)
Used Dev Size : 2930265088 (2794.52 GiB 3000.59 GB)
Raid Devices : 6
Total Devices : 6
State : clean
Active Devices : 6
Working Devices : 6
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
2 8 64 2 active sync /dev/sde
4 8 80 3 active sync /dev/sdf
5 8 96 4 active sync /dev/sdg
6 8 0 5 active sync /dev/sda
Array is clean and rebuilt.
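Depending on your setup, you may also want to make sure /etc/mdadm/mdadm.conf still describes the array correctly and, on Debian-based systems, refresh the initramfs so the array assembles properly at boot. This is a hedged sketch; review the existing file rather than blindly appending to it:
root@debian:~# # print the current array definition and compare it with /etc/mdadm/mdadm.conf
root@debian:~# mdadm --detail --scan
root@debian:~# # after updating mdadm.conf if needed, rebuild the initramfs
root@debian:~# update-initramfs -u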
Conclusion
RAID6 provides improved fault tolerance over RAID5, lowering the risk of data loss. Using the steps above, a RAID5 array can be converted in place to a RAID6 array with mdadm. Converting RAID5 to RAID6 makes your data safer, and the array becomes one less thing on your mind.
*disclaimer*
This document is my own and does not represent anything from any other entity. I will not be held liable for anything bad that comes of it.
Written by Eric Wamsley
Posted: May 2nd, 2013 9:38pm