mdadm - RAID1 maintenance

Some notes regarding RAID maintenance using mdadm in RAID1 configuration.

Install mdadm: apt install mdadm

Hot tip: keep a LIST of md devices and their respective member partitions.
When a disk disconnects or otherwise malfunctions and things get messed up,
it is good to know which partition belongs where.
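
One quick way to record this mapping (just a suggestion, the file name is an example) is to save the output of lsblk and a verbose scan somewhere safe:
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
mdadm --detail --scan --verbose > /root/raid-layout.txt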

A RAID array is made up of one or more regular partitions marked as
Linux RAID volumes. You first create the partitions and mark them as RAID
volumes, and then format the resulting md device with ext4 or another
file system once the array has been created.

When creating the partitions, leave some space (about 2 GB) at the end of the disk
unpartitioned for spare sectors.

You need to update /etc/mdadm/mdadm.conf when changing the array as mdadm uses this upon boot.
To get a current config line you may do this:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

Then open the file and edit the bottom lines - comment out the now inactive ARRAY lines above the newly added ones.
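
For example, the bottom of the file could end up looking something like this (hostname and UUIDs here are made up):
# ARRAY /dev/md/0 metadata=1.2 name=myhost:0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
ARRAY /dev/md/0 metadata=1.2 name=myhost:0 UUID=11111111:22222222:33333333:44444444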

Replace <Value> positions with relevant values below.

Grow or shrink an array - change the number of disks in the array:
mdadm /dev/md<X> --grow --raid-devices=<Y> --force
Where Y is the number of disks.
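
For example, to grow a mirror from 2 to 3 devices (device name and counts here are just examples):
mdadm /dev/md0 --grow --raid-devices=3
Shrinking back down to 2 again may need --force, as in the command above:
mdadm /dev/md0 --grow --raid-devices=2 --force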

Remove a partition from an array. First it must be marked as failed, then it can be removed:
mdadm --manage /dev/md<X> --fail /dev/sd<XY>
mdadm --manage /dev/md<X> --remove /dev/sd<XY>
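
With real device names this might look like the following (names are made up); the two steps can also be combined into one command:
mdadm --manage /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1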

If the drive is no longer connected or has changed name, you get this when trying to remove it:
mdadm: Cannot find /dev/sd<XY>: No such file or directory

Then you can ask mdadm to remove all detached drives instead:
mdadm /dev/md<X> -r detached

To add a partition, the disk must have an identical partition ready. The easiest way to create one is this dangerous way:
sfdisk -d /dev/sd<XY-SOURCE> | sfdisk /dev/sd<XY-TARGET>
Warning: this reads the partition table of sd<XY-SOURCE> and writes it to sd<XY-TARGET>.

An alternative way is to use cfdisk to find out the number of sectors of the source partition, and then create a new empty partition with the same number of sectors, also using cfdisk. Sectors are specified by using S as the unit when entering the size of the partition.
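
If you prefer the command line over the cfdisk screens, the sector count of the source partition can also be read like this (device names are just examples; blockdev reports 512-byte sectors):
sfdisk -l /dev/sda
blockdev --getsz /dev/sda1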

Gparted can only display sectors and does not let you type in a size in sectors, so it cannot be used to create new partitions by sector count. But cfdisk can.

You may copy the partition to the new disk and then format it as cleared with Gparted, but this does not work on a live system since the partitions are mounted and therefore locked.

Add a partition to an array:
mdadm --manage /dev/md<X> --add /dev/sd<XY>

It may be necessary to grow the array, otherwise the new partition will become a spare (marked as (S)).

Find status of array:
cat /proc/mdstat
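
Typical output looks something like this (device names and sizes are made up); a spare shows up with (S) after the device name:
md0 : active raid1 sdc1[2](S) sdb1[1] sda1[0]
      976630464 blocks super 1.2 [2/2] [UU]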

Stop array:
mdadm --stop /dev/md<X>

Failure recovery on disk removal or disconnection

After a disk has been disconnected and reconnected, the status may show "active (active-read-only)". This can be fixed by doing:
mdadm --readwrite /dev/md<X>

Disks may need to be re-added:
mdadm --manage /dev/md<X> --add /dev/sd<XY>

NOTE: (active-read-only) seems to be the status for an md device that has not been written to yet.
A swap partition, for example, may show active-read-only until it is used.
active-read-only should therefore not always be considered an error.

update-grub results in errors, among them: Couldn't find physical volume `(null)'

According to this thread this is because of a degraded array:
http://serverfault.com/questions/617552/grub-some-modules-may-be-missing-from-core-image-error

Check /proc/mdstat and retry update-grub after fixing the degraded array.

Rebuild speed

Can be controlled by writing to a proc file:
echo 50000 > /proc/sys/dev/raid/speed_limit_min
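
The same limits can also be set with sysctl, and there is a matching maximum limit (values are in KiB/s, the numbers here are just examples):
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=200000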

More reading: http://jlmessenger.com/blog/linux-raid-mdadm-dos-donts

USB disks fail to join the array on boot: Found some drive for an array that is already active

This happened on Debian in VirtualBox when one drive was a virtualized disk and the other was connected via USB. The error occurs because mdadm does not wait for the USB disk to initialize.

To solve this you need to add rootdelay=XX to the kernel start line.

Edit /etc/default/grub, find this line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
Add rootdelay with a number representing seconds to wait, for me 15 worked:
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=15"

You now need to update grub. But before that you need to re-add the drives that
are currently kicked out, so they can sync and join the array. If you don't
do this, grub will complain with confusing errors about "(null)".

So re-add your partitions to your md-devices:

mdadm /dev/mdX --add /dev/sdX

Follow /proc/mdstat and wait for the sync to complete, then press CTRL+C:
watch -n1 "cat /proc/mdstat"

If everything went fine, update grub:
update-grub

It should not complain, just do what it is told.

Then try to reboot:
reboot

Hopefully it will now wait before trying to mount the root partition,
which should work if the root delay you entered is long enough for your disks to
present themselves.

GRUB only on one drive

If you are using a software RAID like this for the root device, then you probably have grub installed.
But sadly it is by default installed to one device only. So if that disk fails you
cannot boot the system.

To make grub install to all devices you need to re-configure grub. This lets you install the boot loader to all drives, including the md device (which I guess does not lead anywhere?):
dpkg-reconfigure -plow grub-pc

Then update the initramfs as well:
update-initramfs -u

You may also do this to install grub to one drive:
grub-install /dev/sdX
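
If you prefer to do it by hand, a small shell loop over the member disks (disk names are examples) installs grub to each of them:
for d in /dev/sda /dev/sdb; do grub-install "$d"; done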

Debian 8, degraded array and Gave up waiting for root device

Sadly, as of writing this, it is not enough to force grub to install to every drive to
make Debian 8 boot in a degraded state, which is quite stupid
as RAID1 should be used to make the system more resilient against
disk failures. It should definitely boot if a RAID1 setup has been reduced by one disk.

But in Debian 8, if you remove one drive in a RAID1 setup it may drop you into an
initramfs shell when you reboot, with an error about giving up waiting for the root device.

You may start the array anyway from the initramfs shell
and continue to boot:

mdadm -S /dev/mdX
mdadm --assemble --scan --run /dev/mdX
exit

It may then start but this does not solve the problem.

The problem and the solution is found on this post:
http://serverfault.com/questions/688207/how-to-auto-start-degraded-software-raid1-under-debian-8-0-0-on-boot

It seems that the version of mdadm that comes with Debian Jessie ignores the --scan and --run parameters when used together. This marks degraded arrays as inactive, which then makes the root file system impossible to mount, and therefore the system cannot boot.

Posts about older versions of mdadm suggest adding bootdegraded=1 to the kernel start line,
but this has no effect. What does work, however, is the solution in the link above.

This is a bug introduced after version 7.8 of Debian:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=784070

No read speed improvement with mdadm and 2 disks in RAID1


You may run hdparm -tT /dev/md<X> and find that the result nearly equals one of the single drives (hdparm -tT /dev/sd<X>).

This is explained here:
http://superuser.com/questions/385519/does-raid1-increase-performance-with-linux-mdadm

According to the answers there, the speed improvement does not appear when reading from a single stream.

The improvement comes when reading from multiple streams at the same time. Simply put, since you have twice the heads, you can read more files at the same time.
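
A rough way to see this yourself is to start two reads at once and compare with a single one (file paths are made up, and the files should be large and not already cached):
dd if=/bigfile1 of=/dev/null bs=1M &
dd if=/bigfile2 of=/dev/null bs=1M &
wait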


Warning! Shrinking and moving partitions with Gparted is not possible


As of Debian Jessie 8, shrinking mdadm RAID partitions does not work completely. You can resize the inner file system stored in the md device, and the md device itself (with mdadm --grow --size=...), but the outer linux-raid partition on the drive does not shrink.

Also, do not expect that you can move linux-raid partitions like any other partition in Gparted, because you can't.
It currently does not support moving partitions of this type.

If you want to move a partition forward or back on a disk, your options are to resize with mdadm --grow --size=..., or to create a new partition on another disk, sync the data there, and then sync it back again. This does not feel very safe though.
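
As a rough sketch, shrinking the contents of a mirror could look like this (assuming ext4, made-up device name and sizes; mdadm --size is given in KiB, and the outer linux-raid partitions stay the same size):
umount /dev/md0
e2fsck -f /dev/md0
resize2fs /dev/md0 100G
mdadm --grow /dev/md0 --size=115343360
resize2fs /dev/md0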

Change log:
2019-06-22 21:37:56

This is a personal note. Last updated: 2022-11-28 02:15:26.


