Mdadm checkarray

From Thomas-Krenn-Wiki

This article provides information about the checkarray script of the Linux software RAID tool mdadm and how it is run. Checkarray verifies the consistency of the RAID disks using read operations. Only in the case of read errors are write operations performed, which may affect the performance of the RAID. Additional information can be found in /usr/share/doc/mdadm/README.checkarray.

Troubleshooting Checks

Checkarray verifies the consistency of the RAID disks with read operations. It compares the corresponding blocks of each disk in the array. If a mismatch occurs, the counter in the /sys/block/mdX/md/mismatch_cnt file is increased. The README describes the behavior in case of read errors as follows:[1]

If, however, while reading, a read error occurs, the check will trigger the normal response to read errors which is to generate the 'correct' data and try to write that out - so it is possible that a 'check' will trigger a write. However in the absence of read errors it is read-only.

This automatic repair function is also mentioned by Neil Brown in a mailing list post:[2]

All you need to do is get md/raid5 to try reading the bad block. Once it does that it will get a read error and automagically try to correct it. [...] 'check' (i.e. echo check > /sys/block/mdXX/md/sync_action) will cause md/raid5 to read all blocks on all devices, thus auto-repairing any unreadable blocks.
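The mismatch counter mentioned above can be read for all arrays with a short shell loop. A minimal sketch, assuming the standard sysfs layout; the sysfs root is parameterized here only so the helper can be exercised without a real array:

```shell
# Print the mismatch_cnt of every md array below the given sysfs root
# (defaults to /sys). A nonzero value after a check indicates mismatches.
print_mismatch_counts() {
  sysfs="${1:-/sys}"
  for f in "$sysfs"/block/md*/md/mismatch_cnt; do
    [ -e "$f" ] || continue   # glob did not match: no md arrays present
    dev=$(basename "$(dirname "$(dirname "$f")")")
    printf '%s %s\n' "$dev" "$(cat "$f")"
  done
}

print_mismatch_counts   # output per array, e.g. "md0 0"
```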

Check vs. Repair

In contrast to a check, a repair also includes a resync. The difference to a resync is that no bitmap is used to optimize the process.[3]
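Both operations are triggered the same way via sysfs. A minimal sketch; the device name md0 is an example, writing requires root, and the sysfs root is overridable only so the helper can be tried without a real array:

```shell
# Write an action (check, repair, or idle) to an md device's sync_action file.
trigger_sync_action() {
  action="$1"                         # check | repair | idle
  dev="$2"                            # e.g. md0
  echo "$action" > "${SYSFS_ROOT:-/sys}/block/$dev/md/sync_action"
}

# trigger_sync_action check md0    # read-only check, repairs only read errors
# trigger_sync_action repair md0   # additionally rewrites mismatching blocks
```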

Perform Automatic Check

The automatic check is scheduled to run every Sunday at 12:57 am. However, it is skipped unless the day of the month is less than or equal to seven, so the job effectively runs once a month, on the first Sunday. In addition, cron starts the check with idle I/O priority so as not to put too much load on the system.

Summary:

By default, run at 12:57 am every Sunday, but do nothing unless the day of the month is less than or equal to 7. Thus, only run on the first Sunday of each month. crontab(5) unfortunately sucks in this regard; therefore use this hack (see #380425).
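The "hack" mentioned above can be sketched as follows: cron itself only provides the every-Sunday schedule, and a shell test inside the job skips execution outside the first seven days of the month. The helper below is hypothetical, modeled on the logic in Debian's /etc/cron.d/mdadm:

```shell
# True only during the first seven days of the month; combined with a
# Sunday-only cron schedule this yields "first Sunday of each month".
is_first_week() {
  day="${1:-$(date +%d)}"   # day of month, optionally passed in for testing
  [ "$day" -le 7 ]
}

is_first_week && echo "run checkarray" || echo "skip"
```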

Perform Manual Check

A manual check can be triggered via sysfs or the checkarray script (mdadm has no option for this yet[4]):

  1. via sysfs:
    • echo check > /sys/block/mdX/md/sync_action
  2. via the checkarray script:
    • /usr/share/mdadm/checkarray -a /dev/mdX

Visualization

During the check, the output changes from
root@ubuntumdraidtest:~# cat /sys/block/md0/md/sync_action
idle

to

root@ubuntumdraidtest:~# cat /sys/block/md0/md/sync_action
check

During the check, cat /proc/mdstat (watched here with watch) delivers the following output:

Every 2.0s: cat /proc/mdstat                                                      Wed Sep 25 15:15:25 2013

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[0] sdc1[1]
      2095040 blocks super 1.2 [2/2] [UU]
      [======>..............]  check = 34.1% (716160/2095040) finish=0.0min speed=238720K/sec

unused devices: <none>

The corresponding dmesg excerpt:

[12294.966072] md: data-check of RAID array md127
[12294.966074] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[12294.966075] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for data-check.
[12294.966077] md: using 128k window, over a total of 2095040k.
[12305.660042] md: md127: data-check done.

Pausing the Check

The following command pauses the check without cancelling it; the check can be continued afterwards with /usr/share/mdadm/checkarray -a /dev/mdX:

root@ubuntumdraidtest:~# /usr/share/mdadm/checkarray -x /dev/mdX

Rebuild40 Event

If read errors occur on a device during the check, the software RAID tries to reconstruct the data from the other RAID devices and writes it back to the device that had the read error. In this case the syslog shows messages like RebuildNN (e.g. Rebuild25, Rebuild40, ..., Rebuild99). The two digits indicate at which percentage of the check the rebuild event happened:[5]

adminuser@wc2:~$ tail -f /var/log/syslog
May  9 11:11:27 wc2 mdadm[1508]: Rebuild25 event detected on md device /dev/md/3
May  9 11:28:07 wc2 mdadm[1508]: Rebuild40 event detected on md device /dev/md/3
[...]
adminuser@wc2:~$ date && cat /proc/mdstat 
Fr 9. Mai 11:30:42 CEST 2014
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
[...]
md3 : active raid1 sdb5[1] sda5[0]
      454100403 blocks super 1.2 [2/2] [UU]
      [========>............]  check = 42.4% (192666240/454100403) finish=137.6min speed=31657K/sec
[...]
adminuser@wc2:~$ tail -f /var/log/syslog
May  9 12:01:27 wc2 mdadm[1508]: Rebuild69 event detected on md device /dev/md/3
May  9 12:18:07 wc2 mdadm[1508]: Rebuild82 event detected on md device /dev/md/3
[...]
wfischer@wc2:~$ date && cat /proc/mdstat 
Fr 9. Mai 12:18:18 CEST 2014
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
[...]
md3 : active raid1 sdb5[1] sda5[0]
      454100403 blocks super 1.2 [2/2] [UU]
      [================>....]  check = 82.4% (374380288/454100403) finish=306.8min speed=4330K/sec
[...]
adminuser@wc2:~$ tail -f /var/log/syslog
May  9 15:34:50 wc2 kernel: [19188.038973] ata1.00: exception Emask 0x0 SAct 0xe SErr 0x0 action 0x0
May  9 15:34:50 wc2 kernel: [19188.039346] ata1.00: irq_stat 0x40000008
May  9 15:34:50 wc2 kernel: [19188.039567] ata1.00: failed command: READ FPDMA QUEUED
May  9 15:34:50 wc2 kernel: [19188.039863] ata1.00: cmd 60/00:08:ba:99:0e/04:00:3e:00:00/40 tag 1 ncq 524288 in
May  9 15:34:50 wc2 kernel: [19188.039865]          res 41/40:00:bd:9a:0e/00:00:3e:00:00/40 Emask 0x409 (media error) <F>
May  9 15:34:50 wc2 kernel: [19188.040781] ata1.00: status: { DRDY ERR }
May  9 15:34:50 wc2 kernel: [19188.040999] ata1.00: error: { UNC }
May  9 15:34:50 wc2 kernel: [19188.045186] ata1.00: configured for UDMA/133
May  9 15:34:50 wc2 kernel: [19188.045202] ata1: EH complete
May  9 15:35:35 wc2 kernel: [19232.447063] md: md3: data-check done.
May  9 15:35:35 wc2 mdadm[1508]: RebuildFinished event detected on md device /dev/md/3, component device  mismatches found: 128 (on raid level 1)

Note: the ATA error UNC means Uncorrectable error (often due to bad sectors on the disk).[6]

Monitor mismatch_cnt with Icinga/Nagios

A simple script can be used to monitor mismatch_cnt with Icinga/Nagios. It returns WARNING or CRITICAL if a given threshold is exceeded.

$ sudo vi /usr/lib/nagios/plugins/check_linux_raid_mismatch
#!/bin/bash
# Template based on http://www.juliux.de/nagios-plugin-vorlage-bash
WARN_LIMIT=$1
CRIT_LIMIT=$2
if [ -z "$WARN_LIMIT" ] || [ -z "$CRIT_LIMIT" ]; then
  echo "Usage: check_linux_raid_mismatch WARNLIMIT CRITLIMIT"
  exit 3
fi
DATA=0
for file in /sys/block/md*/md/mismatch_cnt; do
  COUNT=$(cat "$file")
  # keep the highest mismatch count found on any array
  if [ "$COUNT" -gt "$DATA" ]; then
    DATA=$COUNT
  fi
  MD_NAME=$(echo "$file" | awk -F/ '{ print $4 }')
  PERF_DATA+="$MD_NAME=$COUNT "
done
if [ "$DATA" -lt "$WARN_LIMIT" ]; then
  echo "OK - all software raid mismatch_cnts are smaller than $WARN_LIMIT | $PERF_DATA"
  exit 0
elif [ "$DATA" -lt "$CRIT_LIMIT" ]; then
  echo "WARNING - a software raid mismatch_cnt is greater than or equal to $WARN_LIMIT | $PERF_DATA"
  exit 1
else
  echo "CRITICAL - a software raid mismatch_cnt is greater than or equal to $CRIT_LIMIT | $PERF_DATA"
  exit 2
fi

Calling the script on the command line:

$ /usr/lib/nagios/plugins/check_linux_raid_mismatch 1 10
OK - all software raid mismatch_cnts are smaller than 1 | md0=0 md1=0 md2=0 md3=0 md4=0

References

  1. checkarray README (/usr/share/doc/mdadm/README.checkarray)
  2. How to force rewrite of a smart detected bad block with raid5 (marc.info)
  3. md README (/usr/share/doc/mdadm/md.txt.gz)
  4. Re: Check/repair command? (linux-raid mailing list, Neil Brown, 31.03.2014)
  5. mdadm: Rebuild20 event detected on md device (Blog, 06.08.2007)
  6. Libata error messages - ATA error expansion (ATA Wiki)

Author: Thomas Niedermeier

Related articles

Mdadm recover degraded Array
Mdadm recovery and resync