Mdadm recover degraded Array procedure

This example illustrates how Linux Software RAID behaves during continued operation on a degraded array. A partition is deleted, the degraded array is mounted, and data is written to it. Finally, the previously removed partition is re-added and the recovery of the data is analyzed.

To obtain a degraded array, the partition on the RAID member disk /dev/sdc is deleted using fdisk. A partition of the RAID 1 array is then missing, and the array enters degraded status.
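
A sketch of this step (interactive fdisk dialog; the device name matches this test setup): the single partition is deleted with d, and the change is written with w.

root@ubuntumdraidtest:~# fdisk /dev/sdc

Command (m for help): d
Selected partition 1
Command (m for help): w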

Performing a Reboot

The system starts in verbose mode and a message indicates that an array is degraded. Confirmation is required to continue booting with the degraded array.

Boot notice
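
On Ubuntu systems of this era, the confirmation prompt can be avoided by permitting degraded boots; a sketch (the BOOT_DEGRADED option belongs to Ubuntu's mdadm initramfs integration, adapt to your distribution):

root@ubuntumdraidtest:~# echo "BOOT_DEGRADED=true" > /etc/initramfs-tools/conf.d/mdadm
root@ubuntumdraidtest:~# update-initramfs -u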

mdstat Output

  • [U_] means that the second RAID array disk is missing.
Every 1.0s: cat /proc/mdstat                            Thu Oct  3 09:15:21 2013

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active (auto-read-only) raid1 sdb1[2]
      2095040 blocks super 1.2 [2/1] [U_]

unused devices: <none>

Checkarray is not executed

Checkarray does not check the RAID array while in auto-read-only status:

root@ubuntumdraidtest:~# /usr/share/mdadm/checkarray -a /dev/md0
checkarray: W: array md0 in auto-read-only state, skipping...
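
If needed, the array can be switched out of the auto-read-only state manually; a sketch (the first write to the array has the same effect):

root@ubuntumdraidtest:~# mdadm --readwrite /dev/md0

Afterwards /proc/mdstat shows the array as active without the (auto-read-only) marker.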

mdadm Output

mdadm shows the state clean, degraded and reports that /dev/sdc1 is missing (removed).

root@ubuntumdraidtest:~# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Sep 30 14:59:51 2013
     Raid Level : raid1
     Array Size : 2095040 (2046.28 MiB 2145.32 MB)
  Used Dev Size : 2095040 (2046.28 MiB 2145.32 MB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Thu Oct  3 09:09:36 2013
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : ubuntumdraidtest:0  (local to host ubuntumdraidtest)
           UUID : ebe7bfba:a20bf402:f7954545:d920460f
         Events : 160

    Number   Major   Minor   RaidDevice State
       2       8       17        0      active sync   /dev/sdb1
       1       0        0        1      removed

Writing to the Degraded Array

In the following test, the degraded array is mounted and data is written to it. When the complete RAID is restored later, including the second hard disk, this data must be synchronized.

root@ubuntumdraidtest:~# mount /dev/md0 /mnt
root@ubuntumdraidtest:/mnt# dd if=/dev/zero of=/mnt/testfile bs=1M count=1000 oflag=dsync
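
To check later that this data survives the rebuild intact, a checksum can be recorded now; a sketch (the checksum file /root/testfile.md5 is hypothetical and only used for this comparison):

root@ubuntumdraidtest:/mnt# md5sum /mnt/testfile > /root/testfile.md5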

nmon shows the write activity on /dev/sdb:

┌nmon─13g─────────────────────Hostname=ubuntumdraidtRefresh= 2secs ───16:01.20─┐
│ Disk I/O ──/proc/diskstats────mostly in KB/s─────Warning:contains duplicates─│
│DiskName Busy  Read WriteMB|0          |25         |50          |75       100|│
│sda        1%    0.0    0.0|>                                                |│
│sda1       0%    0.0    0.0|>                                                |│
│sda2       0%    0.0    0.0|>disk busy not available                         |│
│sda5       1%    0.0    0.0|>                                                |│
│sdb       90%    0.0   69.3|WWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWW>   |│
│sdb1      91%    0.0   69.3|WWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWW>   |│
│sdc        0%    0.0    0.0|>                                                |│
│sdd        0%    0.0    0.0|>                                                |│
│sdd1       0%    0.0    0.0|>                                                |│
│md0        0%    0.0   69.3|>disk busy not available                         |│
│dm-0       1%    0.0    0.0|>                                                |│
│dm-1       0%    0.0    0.0|>                                                |│
│Totals Read-MB/s=0.0      Writes-MB/s=207.8    Transfers/sec=1026.4           │
│──────────────────────────────────────────────────────────────────────────────│

Creating a partition using fdisk

To restore the RAID, the partition that was deleted in the first step must be created again:

root@ubuntumdraidtest:~# fdisk /dev/sdc

Create a new partition with the command n, then use the command t (change a partition's system id) to set the partition type ID to fd (Linux raid autodetect).
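
Alternatively, with an MBR disk layout such as the one in this test, the partition table of the intact disk can be copied; a sketch (note the order: this overwrites the partition table of /dev/sdc):

root@ubuntumdraidtest:~# sfdisk -d /dev/sdb | sfdisk /dev/sdc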

Re-adding the partition to the array

root@ubuntumdraidtest:~# mdadm --manage /dev/md0 -a /dev/sdc1

The screenshot below illustrates the activity of the RAID software after /dev/sdc1 has been added: it immediately performs a recovery.

Activity after adding /dev/sdc1
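
The rebuild can also be followed in /proc/mdstat; an illustrative sketch (the progress figures are invented for illustration, the device numbering matches the mdadm output below):

root@ubuntumdraidtest:~# watch -n 2 cat /proc/mdstat
md0 : active raid1 sdc1[2] sdb1[3]
      2095040 blocks super 1.2 [2/1] [U_]
      [========>............]  recovery = 42.0% (880128/2095040) finish=0.1min speed=104857K/sec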

The detailed output from mdadm shows that the array has the state clean and that both partitions are in active sync again.

root@ubuntumdraidtest:~# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Sep 30 14:59:51 2013
     Raid Level : raid1
     Array Size : 2095040 (2046.28 MiB 2145.32 MB)
  Used Dev Size : 2095040 (2046.28 MiB 2145.32 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Fri Oct  4 16:06:32 2013
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : ubuntumdraidtest:0  (local to host ubuntumdraidtest)
           UUID : ebe7bfba:a20bf402:f7954545:d920460f
         Events : 467

    Number   Major   Minor   RaidDevice State
       3       8       17        0      active sync   /dev/sdb1
       2       8       33        1      active sync   /dev/sdc1
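
After the resync has finished, the test data written above can be verified; a sketch (assumes the checksum recorded earlier and the filesystem still mounted at /mnt):

root@ubuntumdraidtest:~# md5sum -c /root/testfile.md5
/mnt/testfile: OK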

Author: Thomas Niedermeier

Thomas Niedermeier, who works in the product management team at Thomas-Krenn, completed his bachelor's degree in business informatics at the Deggendorf University of Applied Sciences. He has been with Thomas-Krenn since 2013 and takes care of OPNsense firewalls, the Thomas-Krenn-Wiki and firmware security updates.


Related articles

Mdadm checkarray function
Mdadm recovery and resync