
// RESEND // 7.6: Software RAID1 fails the only meaningful test

(I've uploaded the output files to a website; see the link at the bottom.)

The point of RAID1 is to allow for continued uptime in a failure scenario.
When I assemble servers with RAID1, I set up two HDDs to mirror each other,
and test by booting from each drive individually to verify that it works. For
the OS partitions, I use plain partitions and ext4 to keep things as simple
as possible.
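
For reference, a rough sketch of the kind of mirror I mean, assuming matching
partitions on two disks (device and array names here are illustrative, not my
exact build):

# create a two-disk RAID1 mirror for an OS partition
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# plain ext4 directly on the mirror, no LVM
mkfs.ext4 /dev/md0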

Using the CentOS 7.6 installer (build 1810) I cannot get this test to pass at
all, with or without LVM. Using an older installer (build 1611) it works fine
and I can boot from either drive, but as soon as I run a yum update, the test
fails again.
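
If you want to reproduce this without physically unplugging a drive, marking
one member failed before rebooting should exercise the same path (array and
partition names are examples; adjust for your layout):

# degrade the mirror, then try to boot from the surviving disk
mdadm /dev/md127 --fail /dev/sdb2
mdadm /dev/md127 --remove /dev/sdb2
reboot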

I think this may be related to, or the same as, the issue reported in "LVM
failure after CentOS 7.6 upgrade", since that also involves booting from a
degraded RAID1 array.

This is a terrible bug.

See below for some (hopefully) useful output captured in recovery mode after
a failed boot.

### output of fdisk -l

Disk /dev/sda: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000c1fd0

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048   629409791   314703872   fd  Linux raid autodetect
/dev/sda2   *   629409792   839256063   104923136   fd  Linux raid autodetect
/dev/sda3       839256064   944179199    52461568   fd  Linux raid autodetect
/dev/sda4       944179200   976773119    16296960    5  Extended
/dev/sda5       944181248   975654911    15736832   fd  Linux raid autodetect
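
All five RAID members carry partition type fd as expected. To check that the
md superblocks on the members are intact, mdadm --examine works, e.g.:

# inspect the RAID superblock on one member (sda2 as an example)
mdadm --examine /dev/sda2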

### output of cat /proc/mdstat
Personalities :
md126 : inactive sda5[0](S)
      15727616 blocks super 1.2

md127 : inactive sda2[0](S)
      104856576 blocks super 1.2

unused devices: <none>
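
Both arrays sit inactive with their single member flagged as a spare (S). For
anyone stuck at the same dracut prompt, an array in this state can usually be
started by hand, something along these lines (a workaround only, using the
names from the output above):

# stop the half-assembled array, then re-assemble and force-start it degraded
mdadm --stop /dev/md127
mdadm --assemble --run /dev/md127 /dev/sda2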

### content of rdsosreport.txt
It's big; see http://chico.benjamindsmith.com/rdsosreport.txt