

Contents: Overview of expanding a RAID6 volume with mdadm
Expanding a RAID6 volume can be a scary task, and expanding one that contains a large amount of data can be even more daunting. Thankfully, expanding a RAID6 array with mdadm is fairly straightforward. This guide walks through the entire process of expanding an mdadm-managed RAID6 volume by adding an additional disk.

The steps outlined below can cause data loss. Do not run them on a production system without fully understanding the process and testing in a development environment.


These instructions are not meant to be exhaustive and may not be appropriate for your environment. Always check with your hardware and software vendors for the appropriate steps to manage your infrastructure.


Formatting:
Instructions and information are detailed in black font with no decoration.
Code and shell text are in black font, gray background, and a dashed border.
Input is green.
Literal keys are enclosed in brackets such as [enter], [shift], and [ctrl+c].
Warnings are in red font.




Steps to expand a RAID6 volume with mdadm
  1. Log in to your system and launch a local shell prompt.

  2. Change your shell to run as root.
    user@debian:~$ su -
    Password:
    root@debian:~#

  3. Review the current status of the array.
    root@debian:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid6 sdc[0] sdb[6] sdh[5] sdg[4] sdf[2] sdd[1]
    11721060352 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
    unused devices: <none>

    The array md0 is an active RAID6 volume with 6 disks, all in sync ([UUUUUU]), and there are no unused devices.

  4. Review the detailed information of the array.
    root@debian:~# mdadm --detail /dev/md0
    /dev/md0:
    Raid Level : raid6
    Array Size : 11721060352 (11178.07 GiB 12002.37 GB)
    Raid Devices : 6
    Total Devices : 6
    Persistence : Superblock is persistent
    State : clean
    Number  Major  Minor  RaidDevice  State
           0       8       32              0              active sync       /dev/sdc
           1       8       48              1              active sync       /dev/sdd
           2       8       80              2              active sync       /dev/sdf
           4       8       96              3              active sync       /dev/sdg
           5       8       112            4              active sync       /dev/sdh
           6       8       16              5              active sync       /dev/sdb
    Note: some output was removed for clarity.

    Each disk has 3TB capacity, giving 12TB of usable capacity in the RAID6 array (6 disks, with the equivalent of 2 used for parity). The array appears to be clean and fully functional, so it should be safe to move forward.
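    Before making any changes, it can be worth saving the current array definition so it can be referenced or restored later. A minimal sketch; the output file path here is only an example, and on Debian the persistent config normally lives in /etc/mdadm/mdadm.conf:

```shell
# Record the current array definition for reference before growing.
# /root/md0-before-grow.conf is an arbitrary example path.
mdadm --detail --scan > /root/md0-before-grow.conf
cat /root/md0-before-grow.conf
```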

  5. Add an additional disk to the RAID6 array.
    root@debian:~# mdadm --add /dev/md0 /dev/sda
    mdadm: added /dev/sda
    Here mdadm is told to add (--add) the disk /dev/sda to the target array /dev/md0.
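    Before running --add, it is worth double-checking that the device name really is the new, empty disk, since mdadm will happily consume the wrong one. A quick sanity check, assuming the new disk is /dev/sda as in this guide:

```shell
# Confirm the size and model of the disk about to be added,
# and check that it holds no existing filesystem signature.
lsblk -d -o NAME,SIZE,MODEL /dev/sda
blkid /dev/sda || echo "no filesystem signature found on /dev/sda"
```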

  6. Verify the disk is available to the array.
    root@debian:~# mdadm --detail /dev/md0
    /dev/md0:
    Raid Level : raid6
    Array Size : 11721060352 (11178.07 GiB 12002.37 GB)
    Raid Devices : 6
    Total Devices : 7
    Persistence : Superblock is persistent
    State : clean
    Number  Major  Minor  RaidDevice  State
           0       8       32              0              active sync       /dev/sdc
           1       8       48              1              active sync       /dev/sdd
           2       8       80              2              active sync       /dev/sdf
           4       8       96              3              active sync       /dev/sdg
           5       8       112            4              active sync       /dev/sdh
           6       8       16              5              active sync       /dev/sdb
           7       8       0               -              spare             /dev/sda
    The disk /dev/sda is available to the array md0 and listed as a spare. The total number of devices has increased to 7, while the number of RAID devices is still 6 and the total capacity is unchanged.

  7. Expand the RAID6 array to include data on the new disk.
    root@debian:~# mdadm -v --grow --raid-devices=7 /dev/md0
    mdadm: Need to backup 10240K of critical section..
    This calls mdadm in verbose mode (-v) and grows (--grow) the array /dev/md0 to use 7 RAID devices (--raid-devices=7). Note that --raid-devices counts all active devices in the array, including the two devices' worth of parity.

    The grow command is used because the added disk is currently a spare; growing the array makes it an active data disk in the RAID6 layout.
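    mdadm reports that a critical section must be backed up during the reshape. For extra safety against a crash or power loss early in the reshape, mdadm also accepts a --backup-file option pointing at a file on a separate device; the path below is only an example:

```shell
# Same grow operation, but with an explicit backup file for the
# critical section. The file must NOT live on the array being reshaped.
mdadm -v --grow --raid-devices=7 /dev/md0 --backup-file=/root/md0-grow.backup
```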


  8. Check the status of the array.
    root@debian:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid6 sda[7] sdc[0] sdb[6] sdh[5] sdg[4] sdf[2] sdd[1]
    11721060352 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/7] [UUUUUUU]
    [>....................] reshape = 0.0% (370132/2930265088) finish=4118.1min speed=11857K/sec
    unused devices: <none>

    root@debian:~# mdadm --detail /dev/md0
    /dev/md0:
    Raid Level : raid6
    Array Size : 11721060352 (11178.07 GiB 12002.37 GB)
    Raid Devices : 7
    Total Devices : 7
    State : clean, reshaping
    Reshape Status : 0% complete
    Delta Devices : 1, (6->7)
    Number  Major  Minor  RaidDevice  State
           0       8       32              0              active sync       /dev/sdc
           1       8       48              1              active sync       /dev/sdd
           2       8       80              2              active sync       /dev/sdf
           4       8       96              3              active sync       /dev/sdg
           5       8       112            4              active sync       /dev/sdh
           6       8       16              5              active sync       /dev/sdb
           7       8       0               6              active sync       /dev/sda
    The array is reshaping. Notice that the number of RAID devices is now 7, but the total size of the array remains 12TB until the reshape of the new device completes. /proc/mdstat also gives an estimated completion time (finish=4118.1min, almost 3 days).
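    The finish estimate in /proc/mdstat is simply the remaining work divided by the current speed. Recomputing it from the numbers shown above (370132K done of 2930265088K per device, at 11857K/sec):

```shell
# Remaining time = (total - done) / speed, converted to minutes.
awk -v done_kb=370132 -v total_kb=2930265088 -v speed_kb=11857 \
    'BEGIN { printf "%.1f min\n", (total_kb - done_kb) / speed_kb / 60 }'
```

    This prints roughly 4118 minutes, matching the reported finish=4118.1min to within rounding of the displayed speed.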

  9. Wait until the reshape is complete.
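    While waiting, the reshape can be monitored, and on some systems it can be sped up by raising the kernel's rebuild speed limits. The sysctl values below are only examples; higher limits trade interactive I/O performance for a faster reshape:

```shell
# Allow the reshape to use more bandwidth (values in K/sec per disk).
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=200000

# Re-display progress every 60 seconds, or simply block until done.
watch -n 60 cat /proc/mdstat
mdadm --wait /dev/md0
```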

  10. Verify the status of the array.
    root@debian:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid6 sda[7] sdc[0] sdb[6] sdh[5] sdg[4] sdf[2] sdd[1]
    14651325440 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/7] [UUUUUUU]
    unused devices: <none>

    root@debian:~# mdadm --detail /dev/md0
    /dev/md0:
    Raid Level : raid6
    Array Size : 14651325440 (13972.59 GiB 15002.96 GB)
    Raid Devices : 7
    Total Devices : 7
    State : clean
    Number  Major  Minor  RaidDevice  State
           0       8       32              0              active sync       /dev/sdc
           1       8       48              1              active sync       /dev/sdd
           2       8       80              2              active sync       /dev/sdf
           4       8       96              3              active sync       /dev/sdg
           5       8       112            4              active sync       /dev/sdh
           6       8       16              5              active sync       /dev/sdb
           7       8       0               6              active sync       /dev/sda
    The array is clean and expanded: 15TB of storage is available and all drives are online.



Conclusion
Expanding a RAID6 array is straightforward and requires only a few commands. After expanding the array you can nest array types, expand the partition and file system, or add more drives.
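For example, if /dev/md0 holds an ext4 filesystem directly (no partition table), the new space can be claimed with an online resize; adjust for your actual filesystem and layout:

```shell
# Grow an ext4 filesystem to fill the enlarged array.
# resize2fs can do this online (while mounted) on ext4.
resize2fs /dev/md0
df -h /dev/md0
```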

*disclaimer* This document is my own and does not represent anything from any other entity. I will not be held liable for anything bad that comes of it.

Written by Eric Wamsley
Posted: March 29th, 2014 5:46pm.
Topic: Expanding RAID6
Tags: mdadm, RAID6, expand, Debian, Linux, storage


 Eric Wamsley - ewams.net