Setting up a Software RAID on CentOS and Fedora with mdadm

In this article I’ll explain how to set up a software RAID on CentOS and Fedora. This will turn any number of attached USB drives into any flavour of RAID without the need for a dedicated enclosure. I’m using this setup for internal office data storage with redundancy, together with a Samba share.

This works on any Linux distribution; the only difference is how to install the mdadm package. Here are the steps to make this work in principle:

  • create a partition table on each drive, using msdos as the label
  • create one partition per drive, using fd as the partition type
  • create a logical RAID device using mdadm
  • add a file system and mount the device on a folder of your choice

Prerequisites

Install the mdadm package. I believe it has the same name on all distributions. The following command will work for Fedora and CentOS:

yum install mdadm
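
On current Fedora releases (and CentOS/RHEL 8 and later), yum has been replaced by dnf, which takes the same arguments:

dnf install mdadm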

All following commands assume you have root privileges.

RAID Levels

While we’re on the subject, here’s a quick refresher of which RAID level does what (I tend to forget these things too easily):

  • RAID 0 = striped drives, no redundancy. One drive fails and all data is lost (aka The Kamikaze RAID)
  • RAID 1 = mirrored drives (usually two), one drive can fail without data loss
  • RAID 5 = striping with parity across at least 3 drives, one drive can fail without causing data loss
  • RAID 6 = same as RAID 5, but with double parity, so two drives can fail without causing data loss. Usually used with a large number of drives

For my example I’m creating a RAID 5 with 3 drives. Bear in mind that one drive’s worth of capacity goes to parity, so three 1 TB drives yield roughly 2 TB of usable space. For testing I recommend a cheap hub and several USB drives: their smaller size and price tag make this an ideal playground for testing and experimenting.

Examining your devices

To find out what your newly inserted device(s) are called, we can use fdisk:

fdisk -l
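
If you prefer a compact overview, lsblk lists all block devices and their partitions as a tree, which makes freshly attached drives easy to spot:

lsblk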

On my system I can see that the new devices are as follows:

  • /dev/sde
  • /dev/sdf
  • /dev/sdg

They may or may not have a partition on them already. Let’s format them next and prepare them for our RAID.

Formatting your devices

parted will take care of creating the partition table. We need to give the command the device to prepare and a label for the partition table. We’re using msdos here, but other options (such as gpt) are available. Note that this is not the file system; we’ll create that later. man parted has more details.

parted /dev/sde mklabel msdos
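
All drives in the RAID need the same label, so you can run this in a quick loop instead of repeating the command; a minimal sketch, assuming the device names from above:

for dev in /dev/sde /dev/sdf /dev/sdg; do
    parted --script "$dev" mklabel msdos
done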

Creating new partitions

To build our RAID, we need one partition per drive, using the fd type (Linux raid autodetect). Again fdisk will be able to help us out, this time with a little wizard.

fdisk /dev/sde

This will prompt you for a command and give feedback. Here’s what to use step by step:

  • press n (create new partition)
  • press p (select a primary partition)
  • press Enter (default is partition number 1)
  • press Enter (default is First Sector)
  • press Enter (default is Last Sector)

The default partition type is Linux (ID 83). We need to change this to Linux raid autodetect (ID fd). You can verify this by pressing p (as in print), but that’s optional. To change the type:

  • press t (change partition type)
  • type fd (selects Linux raid autodetect)
  • type w (write all changes to disk and exit)

Repeat these steps with every device you’d like to participate in the RAID. Verify each partition with p while you’re still in fdisk, or type q to quit back to the shell. Make sure to write all changes to the partition table (final step above), otherwise our partitions won’t be recognised by the operating system.
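
If you’d rather skip the interactive wizard, parted can do the same non-interactively: it creates the partition and sets the raid flag, which corresponds to type fd on an msdos label. A sketch, assuming the same devices as before:

for dev in /dev/sde /dev/sdf /dev/sdg; do
    parted --script "$dev" mkpart primary 0% 100%
    parted --script "$dev" set 1 raid on
done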

Creating the RAID

Let’s use the mdadm command to set up our RAID now. We’ll create a new mountable device, for which the command needs some parameters:

  • what to do (create)
  • the name of the new device node (/dev/md1)
  • what type of RAID to create (known as level, such as raid5 or mirror)
  • how many devices partake in the RAID (raid-devices)
  • which devices partake in our adventure (partitions we’ve created above)

In my case the command looks like this:

mdadm --create /dev/md1 --level=raid5 --raid-devices=3 /dev/sde1 /dev/sdf1 /dev/sdg1

This will start our RAID, and you will see (or perhaps hear) your drives hard at work. There’s some initial setup procedure going on under the hood which will take some time, depending on the size and speed of your drives. You can check the status of the procedure with

mdadm --detail /dev/md1
...
Rebuild Status : 4% complete

Among a lot of other text, take a look at the Rebuild Status line. Every time you execute this command, the status will change and eventually complete. During this time your RAID is already usable. Also take note of the following in the command’s output:

State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1

Large drives will take several hours to initialise. On a new RAID 5, mdadm starts the array in degraded mode with the last drive marked as a spare, then syncs the parity data onto it (which is why the output above shows one spare and a degraded state). When the process has finished syncing all drives, this will change to

State : clean
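
You can also follow the sync from /proc/mdstat, which the kernel keeps updated with a progress bar and a time estimate:

cat /proc/mdstat

Wrap it in watch for a self-refreshing view (watch cat /proc/mdstat).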

Adding a file system and mounting

At this point we have a usable device, but we can’t access it via our shell yet. We need to add a file system for this, and of course we need to mount it too. Since we’re on a Linux system, let’s use the ext4 file system. mkfs will help us here:

mkfs.ext4 /dev/md1

This will take a moment. To mount our now usable file system, create or choose a mount point on your system. I’ll make a brand new one called storage and mount my RAID there:

mkdir /storage
mount /dev/md1 /storage
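
To verify the mount and check the usable size, ask df:

df -h /storage

If the array should keep its name and mount automatically after a reboot, record it in mdadm’s config file and add an fstab entry. A minimal sketch, assuming the device and mount point from above:

mdadm --detail --scan >> /etc/mdadm.conf
echo '/dev/md1 /storage ext4 defaults 0 0' >> /etc/fstab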

And that’s it – we’ve built ourselves a software RAID 😍

I’ll explain how to test, rebuild, add and remove disks in another article.
