In my previous article I explained how to set up a software RAID with mdadm. That was a RAID 5 setup with 3 drives, and in this article we’ll take a look at how to expand it with a 4th drive, increasing its overall size. Here’s what we’ll do in principle:
- create a partition table on the new drive
- create an fd partition on it (Linux auto raid type)
- add the partition as a spare
- grow the array
- resize the file system
Getting the new drive ready
Let’s get our new drive ready first. We’ll assume it’s /dev/sdc for now (you can find out with fdisk -l) and start by giving it a fresh partition table:
parted /dev/sdc mklabel msdos
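To double-check that the label was written, we can ask parted to print the disk’s layout (the exact output varies between parted versions):
parted /dev/sdc print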
Now we’ll create a new partition:
fdisk /dev/sdc
Make sure the new partition is of type fd. Follow the steps from my previous article if you need more details. If all goes well, we’ll have a new usable partition called /dev/sdc1.
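In case you need a quick reminder, a typical fdisk session on an msdos-labelled disk looks roughly like this (the prompts differ slightly between fdisk versions):
- n creates a new primary partition (accept the defaults to use the whole disk)
- t changes the partition type; enter fd for “Linux raid autodetect”
- w writes the changes to disk and exits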
Increasing the RAID’s internal rebuild speed
Before we move on to our array, let’s increase its stripe cache size so that the rebuild in our next step takes a little less time. This is a wonderful tip by Tobias Hofmann that reduced the wait time I had on my test setup considerably. Thanks for sharing, Tobias!
echo 32768 > /sys/block/md1/md/stripe_cache_size
My RAID is /dev/md1, change this accordingly for your own (md0, md2, etc).
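You can read the current value back the same way at any time; as far as I know, the kernel default is 256:
cat /sys/block/md1/md/stripe_cache_size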
Another tip from Zack Reed suggests we can also increase the RAID’s speed limits. This should work for every RAID on your system, regardless of its designation:
echo 50000 > /proc/sys/dev/raid/speed_limit_min
echo 200000 > /proc/sys/dev/raid/speed_limit_max
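To verify that the new limits took effect, read both files back (the units are kibibytes per second, and the values reset to their defaults on reboot):
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max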
Adding our partition as a hot spare
It’s easy to add new unused partitions to our RAID. These will show up as spares, and should one of our main drives go down, mdadm will immediately start rebuilding onto a spare. Here’s how we add our new partition:
mdadm --add /dev/md1 /dev/sdc1
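mdadm itself can confirm the addition; in its detail output, the new partition should show up with the role of a spare:
mdadm --detail /dev/md1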
We can examine the state of affairs across all arrays by looking at /proc/mdstat.
cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md1 : active raid5 sdc1[4](S) sdg1[3] sdf1[2] sde1[0]
See the little (S) next to our new drive? This means mdadm sees it as a partition that can be used should another one go down, for immediate re-population and integration. I’ve tested this by pulling one of my drives out; it’s quite exciting to see how well it works.
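If you’d rather not pull a physical drive, mdadm can also simulate a failure. Marking one of the active members as failed (here sde1, taken from the output above) should kick off the same rebuild onto our spare; only do this on a test setup, of course:
mdadm /dev/md1 --fail /dev/sde1
Afterwards you can --remove the partition and --add it again to return to a clean state with a fresh spare.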
Growing the size of our RAID
We can ask mdadm to turn the above spare into a participating member of the current setup. This requires a bit of re-shuffling under the hood so the parity data and usable blocks are distributed evenly across all drives. Here’s how to do it:
mdadm --grow /dev/md1 --raid-devices=4
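Depending on your mdadm version, you may want (or be asked) to provide a backup file for the critical section of the reshape. It needs to live on a drive that is not part of the array; the path below is just an example:
mdadm --grow /dev/md1 --raid-devices=4 --backup-file=/root/md1-grow.bak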
This syntax is similar to the one we used when initially building the array. mdadm will go to work immediately, but it will still take a considerable amount of time (even with the above speed tweaks). To keep an eye on it, call cat /proc/mdstat again from time to time, or use watch to have a continuous update in your terminal window.
watch cat /proc/mdstat
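watch refreshes every two seconds by default; adding -d highlights what changed between updates, which makes the progress easier to follow:
watch -d cat /proc/mdstat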
This will give you an approximate time to finish. Let it run its course before proceeding with the next step.
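Once the reshape has finished, mdadm’s detail output should report four active devices and the new, larger array size:
mdadm --detail /dev/md1 | grep -E 'Array Size|Active Devices'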
Resizing the file system
While our array is now larger, the ext4 file system I’m using doesn’t know about the new size yet. We need to extend it so that the additional space becomes usable. In a real-life environment with important data we should probably unmount the RAID first:
umount /raid
and also run a file system integrity check (the -f flag forces a check even if the file system appears clean):
e2fsck -f /dev/md1
However, since I’m just playing around with a bunch of USB sticks, I won’t do that and simply crack on with resizing the file system (Linux kernels since 2.6 can do this while the partition is still mounted). Here’s how to do that:
resize2fs /dev/md1
This will take some time. When finished, take a look at your current setup with df -h and see that your storage space on the RAID has increased. Hurrah!
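If you unmounted the array for the file system check earlier, remember to mount it again before checking; assuming the mount point from above:
mount /dev/md1 /raid
df -h /raid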