Help finishing off Centos 7 RAID install

I've just finished installing a new Bacula storage server. Prior to doing the
install I did some research and ended up deciding on the following layout:

6x 4TB drives
/boot/efi efi_fs sda1
/boot/efi_copy efi_fs sdb1
/boot xfs RAID1 sda2 sdb2
VG RAID6 all drives containing


1) The big problem with this is that it is dependent on sda for booting. I
did find an article on how to set up boot loading on multiple HDDs,
including cloning /boot/efi, but I now can't find it. Does anyone know of a
similar article?

2) Is putting swap in a RAID a good idea? Will it help, or will it cause
problems?

Re: Help finishing off Centos 7 RAID install

By Gary Stainburn at 01/11/2019 - 07:22

I completed the install as described below which went well.

I then copied the contents of /boot/efi to /boot/efi_copy using

cd /boot
rsync -acr efi/ efi_copy

I then installed grub2 on sdb

yum install grub2-efi-modules
grub2-install /dev/sdb

Everything ran as expected with no errors. The instructions then said to test
the install by powering off and swapping sda and sdb. Instead, I just
unplugged sda.

When I booted, the system dropped straight to a grub> prompt.

I powered off and reconnected sda, hoping to be able to continue experimenting.
However, even with sda reconnected it just boots into a grub> prompt.

Anyone got suggestions what to do next?

(This server is not live, so can be wiped / experimented on)
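For anyone hitting the same dead end: one thing worth trying from the bare
grub> prompt is locating the grub config by hand and loading it. This is only
a sketch; the device name (hd0,gpt1) and the config path are assumptions, so
use ls to find the right ones on your system:

```shell
# These are typed at the grub> prompt, NOT in a Linux shell.
ls                                        # list the drives/partitions grub can see
ls (hd0,gpt1)/                            # poke around until you find the ESP
configfile (hd0,gpt1)/EFI/centos/grub.cfg # load the normal boot menu from it
```

If the menu loads and the system boots, the damage is limited to the boot
entries and can be repaired from the running system.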

On Wednesday 09 January 2019 10:30:54 Gary Stainburn wrote:

Re: Help finishing off Centos 7 RAID install

By Gordon Messmer at 01/11/2019 - 19:39

On 1/11/19 3:22 AM, Gary Stainburn wrote:

If you want to manually provide redundant EFI system partitions (no
RAID1), all you need to do is run efibootmgr to add a second entry.  You
don't need to install grub2-efi-modules or run grub2-install.  Use:

efibootmgr -c -w -L 'CentOS Linux' -d /dev/sdb -p 1 -l 'EFI\centos\shim.efi'

... where the option to -d is the name of the drive containing the EFI
system partition and the option to -p is the EFI system partition number.

For reference, that command will appear in /var/log/anaconda/program.log
after any normal installation.
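After adding the entry, it's worth sanity-checking the result. The entry
numbers below (0000/0001) are assumptions; read them off the verbose listing
first:

```shell
# List all boot entries verbosely to confirm both ESPs are registered:
efibootmgr -v

# Optionally set the boot order so the sdb entry acts as a fallback
# (substitute the real entry numbers from the -v output):
efibootmgr -o 0000,0001
```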

Re: Help finishing off Centos 7 RAID install

By Johnny Hughes v... at 01/11/2019 - 11:16

If you boot, isn't there a way to get into an EFI boot menu where you can
see the different EFI boot entries? Maybe you can see there what is wrong.

On the other hand, maybe you could even reinstall and try to configure
/boot/efi on RAID1 as it was described by others in this thread?


Re: Help finishing off Centos 7 RAID install

By Gary Stainburn at 01/11/2019 - 11:27

On Friday 11 January 2019 15:16:12 Simon Matter via CentOS wrote:
I have gone down this route and re-installed. This time I did the EFI
partition as RAID1, as people have suggested, and this time the installer let
me select what I wanted without warnings.

We'll see what happens after the install completes, which unfortunately will
probably be after I finish tonight.

Re: Help finishing off Centos 7 RAID install

By Gordon Messmer at 01/09/2019 - 16:02

On 1/9/19 2:30 AM, Gary Stainburn wrote:

Use RAID1 for /boot/efi as well.  The installer should get the details right.

It'll be moderately more reliable.  If you have swap on a non-redundant
disk and the kernel tries to read it, bad things (TM) will happen.

The only "drawback" that I'm aware of is that RAID consistency checks
become meaningless, because it's common for swap writes to be canceled
before they complete, in which case one disk will have the page written but
the other won't.  This is by design, and considered the optimal
operation.  However, consistency checks don't exclude blocks used for
swap, and they'll typically show mismatched blocks.
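Those mismatches show up in the kernel's per-array counter after a check run;
a quick way to look (the array name md126 here is an assumption):

```shell
# After a 'check' pass, a nonzero count on the swap array is expected noise:
cat /sys/block/md126/md/mismatch_cnt
```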

Re: Help finishing off Centos 7 RAID install

By Johnny Hughes v... at 01/10/2019 - 12:35

Are you sure? How is the EFI firmware going to know about the RAID1?

RAID1 is going to have type FD00 (Linux RAID) whereas EFI firmware expects
type EF00 (EFI System Partition) to boot from.

EFI then reads the GPT table directly and looks for a vfat filesystem to
boot from. Default Linux software RAID1 on EL7 uses metadata 1.2 which is
located at the beginning of the partition. EFI won't recognize the vfat
filesystem behind the RAID metadata.

Maybe certain EFI firmware is more tolerant but at least in my case I
didn't get it to work on RAID1 at all.
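For reference, a mirror the firmware can still read has to keep the vfat
superblock at the start of the partition, which means metadata 1.0 (stored at
the end of the partition). A sketch of building one by hand; device names
sda1/sdb1 and md127 are assumptions, and this destroys whatever is on those
partitions:

```shell
# DO NOT run on a live system; partitions and array name are assumptions.
mdadm --create /dev/md127 --level=1 --raid-devices=2 --metadata=1.0 \
      /dev/sda1 /dev/sdb1
mkfs.vfat /dev/md127   # firmware sees a plain vfat filesystem at the partition start
```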

I'd really be interested, if someone got it to work, in how exactly it's
configured. How exactly do the GPT tables look, and how exactly is the RAID1
set up?

Re: Help finishing off Centos 7 RAID install

By Gordon Messmer at 01/10/2019 - 14:01

On 1/10/19 8:35 AM, Simon Matter via CentOS wrote:


It doesn't specifically.  Anaconda will create two EFI boot entries,
each referring to one of the mirror components:

# efibootmgr -v
BootCurrent: 0001
Timeout: 1 seconds
BootOrder: 0001,0000
Boot0000* CentOS Linux
Boot0001* CentOS Linux
# blkid -p  -o value -s PART_ENTRY_UUID /dev/sda1
# blkid -p  -o value -s PART_ENTRY_UUID /dev/sdb1

I think you're referring to MBR partition types.  I'm not certain, but I
don't see those values in a GPT.

Anaconda knows that it needs to use 1.0 metadata for EFI system
partitions, which is what I mean when I said "the installer should get
the details right."

# df /boot/efi/
Filesystem     1K-blocks  Used Available Use% Mounted on
/dev/md125        194284 11300    182984   6% /boot/efi
# mdadm --detail /dev/md125
           Version : 1.0

It might be more useful to know if this is still a problem for you with
the current release.  I'm not sure when support for this was added to
Anaconda, but it was fairly recent.  You may have last tried this before
that release.

Re: Help finishing off Centos 7 RAID install

By Johnny Hughes v... at 01/10/2019 - 14:20

So far my config looks similar.

No, it's GPT. What does "gdisk -l /dev/sda" show? That would be
interesting to see.

OK I see. Do you also have /boot mounted on its own MD device?

IIRC the initial install was with 7.5.


Re: Help finishing off Centos 7 RAID install

By Gordon Messmer at 01/10/2019 - 17:28

On 1/10/19 10:20 AM, Simon Matter via CentOS wrote:

If you're looking for the "code" column, they're all 0700.  See the man
page for gdisk.  The section describing the "l" command clarifies:

    For ease of data entry, gdisk compresses these into two-byte
    (four-digit hexadecimal) values that are related to their
    equivalent MBR codes. ... Note that these two-byte codes are
    unique to gdisk.

The codes printed by gdisk don't actually appear in the GPT.


Re: Help finishing off Centos 7 RAID install

By Stephen John Smoogen at 01/10/2019 - 13:58

On Thu, 10 Jan 2019 at 11:35, Simon Matter via CentOS < ... at centos dot org>

part raid.300 --fstype="mdmember" --ondisk=sda --size=477
part raid.310 --fstype="mdmember" --ondisk=sdb --size=477
part raid.320 --fstype="mdmember" --ondisk=sdc --size=477
part raid.330 --fstype="mdmember" --ondisk=sdd --size=477
part raid.340 --fstype="mdmember" --ondisk=sde --size=477
part raid.350 --fstype="mdmember" --ondisk=sdf --size=477
part raid.360 --fstype="mdmember" --ondisk=sdg --size=477
part raid.370 --fstype="mdmember" --ondisk=sdh --size=477

raid /boot/efi --device=1 --fstype="efi" --level=RAID1
--fsoptions="umask=0077,shortname=winnt" raid.300 raid.310 raid.320
raid.330 raid.340 raid.350 raid.360 raid.370

That said, I don't know what the low-level items for it are, or whether this is
universal. From dealing with various EFI chipsets, I'm guessing the
standard is rather loose about what counts as EFI compliant :).

Re: Help finishing off Centos 7 RAID install

By Keith Keller at 01/09/2019 - 23:13

On 2019-01-09, Gordon Messmer <gordon. ... at gmail dot com> wrote:
If the swap is RAID1 on its own partitions (e.g., sda5/sdb5), then
CHECK_DEVS in /etc/sysconfig/raid-check can be configured to check
only specific devices.
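A minimal /etc/sysconfig/raid-check fragment along those lines; the array
names are assumptions and need to match your non-swap MD devices:

```shell
# /etc/sysconfig/raid-check (sourced by the weekly raid-check cron job)
# Scrub only the non-swap arrays, so the expected swap mismatches
# stop showing up in the report.
CHECK="check"
CHECK_DEVS="md0 md2"
```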


Re: Help finishing off Centos 7 RAID install

By Johnny Hughes v... at 01/09/2019 - 12:42

I also spent (wasted?) quite some time on this issue because I couldn't
believe things don't work as nicely with EFI as they did before. The
designers of EFI obviously forgot that some people might want to boot from
software RAID in a redundant way.

I ended up with a similar design to yours; my fstab has this:
/dev/md0 /boot xfs defaults 0 0
/dev/nvme0n1p1 /boot/efi vfat umask=0077,shortname=winnt 0 0
/dev/nvme1n1p1 /boot/efi.backup vfat umask=0077,shortname=winnt 0 0

Then in my package update tool I have a hook which syncs like this:


efisync() {
    if [ -d "${EFISRC}/EFI" ] && [ -d "${EFIDEST}/EFI" ]; then
        rsync --archive --delete --verbose "${EFISRC}/EFI" "${EFIDEST}/"
    fi
}

BTW, another method could be to put /boot/efi on RAID1 with metadata
version 1.0, but that doesn't seem to be reliable; it works on some
systems but fails on others, according to reports I've read.

No problem at all, and I don't want to lose a swap device if a disk fails.
So putting it on RAID is the correct way, IMHO.