Postings by Fabian Arrotin

CentOS 7.7.1908 being available in CBS/koji

As you probably saw, we released CentOS 7.7.1908 for all
architectures earlier today.
Just after that, we regenerated the koji internal repos to use that new
release, so that's now done :

That means that all your builds for CentOS 7 will be built against
7.7.1908 content.

- planned maintenance/outage :

Due to some maintenance/upgrade work on the storage node holding all
artifacts used by koji/CBS, we'll have to shut down the koji/cbs
service during that maintenance window.

Migration is scheduled for "Tuesday July 9th, 7:00 am UTC".
You can convert to local time with $(date -d '2019-07-09 07:00 UTC')

The expected "downtime" is estimated at ~60 minutes, the time needed to
reinstall the storage node and have ansible reconfigure it.
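For reference, the conversion hint above can be scripted directly; a small sketch (the timezone below is only an example, not part of the announcement):

```shell
# Convert the announced UTC window to a local timezone by setting TZ;
# "Europe/Brussels" is just an example timezone, not from the post.
TZ="Europe/Brussels" date -d '2019-07-09 07:00 UTC'
# Sanity-check against UTC itself:
TZ="UTC" date -d '2019-07-09 07:00 UTC' '+%Y-%m-%d %H:%M %Z'
```

This relies on GNU date (as shipped on CentOS) for the `-d` option.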

Thanks for your understanding and patience.

on behalf of the Infra team,

CentOS 8.0.1905 build status


As everybody is probably aware by now, RHEL 8.0 was released earlier this
week.

Instead of publishing multiple blog posts here and then pointing to updated
content, we decided this time to have a dedicated wiki page that can be
used to track our current status : <a href="" title=""></a>

So now you can look at that page while we're busy on those tasks, and
refresh from time to time.

Let's spread the news about the wiki page and point people (on
mailing lists, IRC, forums, etc.) to that page to get all the latest news
about the CentOS 8.0.1905 build status !

Cheers !

Delayed migration

Due to unexpected circumstances, we have to delay the
I'll post an update when we have all the details, but it seems the
migration will only be delayed by a few days.

Thanks for your understanding.

migration, please read

As pre-announced a long time ago (see
<a href="" title=""></a>),
we'll migrate to a new host/platform (pagure/repospanner).

The migration is planned for : April 8th (more details later
about the exact hour, as we need acknowledgement from all
involved people).

Instead of writing a long email, we decided to put the focus on a
dedicated page that will, after the migration, replace the current one.
- current one : <a href="" title=""></a>
- new one : <a href="" title=""></a>


After migration, we'

upgraded to pagure 5.3 (beta) and new features like mqtt notifications


In October 2018, we (pre)announced that we were working on a
consolidated pagure infra for and
(<a href="" title=""></a>)
We asked for feedback from the community, and one of the remarks we
received was about being able to follow commits for all /rpms/*

While there were requests to have that available through the pagure API
(for people already using the gitblit RPC feature of the current
<a href="" title=""></a> instance), another idea was to simply send that
to a message broker, so that people can just subscribe to that broker
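To illustrate the broker idea, here is a minimal subscriber-side filtering sketch; the payload shape shown is purely an assumption, as the post does not specify a message format:

```shell
# Hypothetical commit notification payload -- the real format is not
# specified in the post; this only illustrates "subscribe and filter".
msg='{"repo": "rpms/kernel", "branch": "c7", "rev": "abc123"}'

# Keep only notifications for the /rpms/* namespace:
if printf '%s' "$msg" | grep -q '"repo": "rpms/'; then
    echo "commit in rpms/ namespace"
fi
```

In a real consumer, `msg` would arrive from the broker subscription rather than being hard-coded.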

what about CentOS7 ppc64 ?

Just wondering the status of CentOS 7 ppc64, as 7.6.1810 was released on
December 3rd 2018 for all arches *but* ppc64, and no news since.

In the CentOS Infra, I had to maintain ppc64 hosts for both cbs/koji and
other nodes, but since someone recently decided to archive 7.5.1804 to
vault, and there is also no ppc64 content on the mirrors anymore, anyone
having to deal with ppc64 is now in a position where :
- it's impossible to install a new ppc64 host
- it's impossible to just "yum install" anything either

Can we get an official statement from the ppc64 maintainer about this ?
Or should we consider that a

CentOS 7.6.1810 UEFI/Shim issue .. feedback wanted !

We got some reports from people unable to reboot their nodes after
updating to 7.6.1810, and more specifically to the newer shim (v15).
It seems to affect only nodes booting in UEFI mode, but without SecureBoot.

We wrote that in the ReleaseNotes, including a link to the bug report :
<a href="" title=""></a>

We now have a workaround in that bug report, and also a new interim
build (not yet signed by Microsoft), but we need feedback, as the only
node on which I could test this myself is my old 2008 iMac computer ..
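For testers: a quick sketch to check whether a node is in the potentially affected configuration (UEFI boot, without SecureBoot); mokutil is the usual tool for querying the SecureBoot state:

```shell
# Detect the boot mode: /sys/firmware/efi only exists on UEFI boots.
boot_mode() {
    if [ -d /sys/firmware/efi ]; then
        echo "UEFI"
    else
        echo "BIOS"
    fi
}
echo "Boot mode: $(boot_mode)"
# On UEFI nodes, mokutil (from the mokutil package) reports whether
# SecureBoot is enabled; affected nodes are UEFI *without* SecureBoot.
if [ "$(boot_mode)" = "UEFI" ]; then
    mokutil --sb-state 2>/dev/null || echo "SecureBoot state unknown"
fi
```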

CentOS 7.6.1810 release available for builds in CBS/Koji


As announced
(<a href="" title=""></a>),
CentOS 7.6.1810 is now available and released for the following
arches : x86_64, aarch64, ppc64le, armhfp.
That release is now the default one for the CBS/koji build tasks (and
all koji regen-repos tasks are currently running).

That means that for your next builds, you'll be building against
7.6.1810 (even if technically that was already the case with the
7.5.1804+updates+CR repositories)

That should also mean the end of the "freeze" wrt pushing tagged pkgs
for release to

: unexpected DC issue

Due to a DC outage where some nodes are hosted, the following (public)
services provided by the CentOS Infra are currently down/unreachable :

- <a href="" title=""></a>
- <a href="" title=""></a>

After contacting the DC, we learned that they're trying to restore
services as fast as possible; it seems related to a power outage.

More info when services are back online, or when we receive a
status update.

Small service interruption :

Due to a HDD swap/replacement, we'll have to shut down a node, proceed
with the maintenance, and restart the server.

Migration is scheduled for "Tuesday 9th October, 4:00 pm UTC".
You can convert to local time with $(date -d '2018-10-9 16:00 UTC')

The expected "downtime" is estimated at ~20 minutes.

Affected services:
- some internal services for centos infra
- <a href="" title=""></a>

Thanks for your understanding and patience.

on behalf of the Infra team,

mirrorlist code updated / SIGs and AltArch support rolled-in


Recently I had to update the existing code running behind the mirrorlist
service (the service that returns a list of validated mirrors for yum;
see the /etc/yum.repos.d/CentOS*.repo file), as it was still using the
Maxmind GeoIP Legacy country database. As you probably know, Maxmind
announced that they're discontinuing the Legacy DB, so that was one
reason to update the code.

SecureBoot : rolling out new shim pkgs for CentOS 7.5.1804 in CR repository - asking for testers/feedback

When we consolidated all CentOS Distro builders in a new centralized
setup covering all arches (so basically x86_64, i386, ppc64le, ppc64,
aarch64 and armhfp these days), we also wanted to add redundancy where
possible.

The interesting "SecureBoot" corner case came on the table and we had to
find a different way to build the following packages:
- shim (both signed and unsigned)
- grub2
- fwupdate
- kernel

The other reason why we considered rebuilding it is that the cert we
were using has expired :

curl --location --silent
<a href="" title=""></a>
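A sketch of how such a cert expiry check can be done with openssl; since the original URL is elided, the example generates a throwaway self-signed cert instead of fetching the real one with curl:

```shell
# Create a throwaway self-signed cert (1 day validity) purely for demo.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 1 \
        -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null
# Show the expiry date embedded in the cert:
openssl x509 -in /tmp/demo.crt -noout -enddate
# -checkend 0 exits 0 only if the cert is not already expired:
openssl x509 -in /tmp/demo.crt -noout -checkend 0 && echo "not expired"
```

The same `openssl x509` invocations work on a cert piped in from `curl --location --silent`.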

- planned outage : {bugs, fr, status}

Due to a hardware replacement, we'll have to power down the hypervisor
hosting several VMs, including the ones used for the following impacted
services :

* <a href="" title=""></a>
* <a href="" title=""></a>
* <a href="" title=""></a>

Hardware maintenance is scheduled for "Thursday May 24th, 12:00 pm UTC".
You can convert to local time with $(date -d '2018-05-24 12:00 UTC')

Root cause : a HDD in the array used to host the VMs failed, and there
are no hot-swap capabilities, so a full "power off" is required to replace
the failed HDD and then start the rebuild operation.

CentOS DevCloud environment down

We have a major service disruption with our DevCloud RDO environment
(<a href="" title=""></a>) right now, so it's currently unusable.

The underlying storage node (over InfiniBand) used by the OpenStack
compute nodes is currently powered off and in an unusable state.

Proposal : potential "freeze" for SIG content to be built and/or pushed to mirrors


As you're all aware, RHEL 7.5 was released yesterday, and on the CentOS
side, the massive rebuild has started for all arches (x86_64, i386, ppc64,
ppc64le, aarch64 and armhfp).

We'd like to have your opinion about the following proposal (TBD) :
- we freeze the signing process for now, while we have focus on the
distro rebuild.
- we can also eventually freeze the CBS builders, so that all SIGs are
now waiting for 7.5.1804 content to be available in CBS/koji and so
rebuild against those pkgs

We hope (but can't have any estimate) that for this release, all arches
should be in "sync", so normal

- planned outage : (Bug Tracker)

This notification is to let you know that we'll migrate our
<a href="" title=""></a> bug tracker service to a different node, and
also update it to a new version of MantisBT (version 2.x)

This migration will give us more possibilities, including soon trying to
get SSO working even on our bug tracker (that part still needs to be done

Migration is scheduled for "Tuesday February 20th, 8:00 am UTC".
You can convert to local time with $(date -d '2018-2-20 8:00 UTC')

The expected "downtime" is estimated at ~10 minutes, the time needed to
update/propagate the updated DNS A/AAAA records + las

Cleaning-up Extras for inconsistencies (aka golang in aarch64)

Last week I tried to build a pkg for infra that has all the arches
enabled (so x86_64, ppc64, ppc64le and aarch64).

That package has "BuildRequires: golang >= 1.8" , which is in the base
distro but the --scratch build failed starting with aarch64 and
complaining with
"DEBUG Error: No Package found for golang >= 1.8 "

So, while there is a golang 1.8.3 package in [base], there is still an
old version in [extras] :
golang.aarch64 1.6.3-2.el7

And because koji will not automatically pick the highest ENVR, but rather
a pkg bas
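A minimal illustration of the behaviour described above (this is a simplification, not koji's actual resolution code, and the inheritance order shown is an assumption):

```shell
# Versions taken from the post:
in_extras="1.6.3-2.el7"   # golang in [extras]
in_base="1.8.3"           # golang in [base]

# A plain version sort shows which version is actually higher:
higher=$(printf '%s\n%s\n' "$in_extras" "$in_base" | sort -V | tail -n 1)
echo "higher version: $higher"

# Simplified "first tag in the chain wins" resolution: the first repo
# providing the package is picked, regardless of version.
for entry in "extras:$in_extras" "base:$in_base"; do
    echo "picked: $entry"
    break
done
```

With the inconsistent extras content in place, the "picked" entry is the old 1.6.3 build, which is why the BuildRequires on golang >= 1.8 failed.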

- planned outage : All services

Due to important security updates that we need to apply within the whole
CentOS infra, please be aware that some nodes/services will be
unresponsive during that maintenance window.

As it targets both CentOS 6 and CentOS 7, we'll probably apply those
updates ASAP, but we'll not notify for each impacted service, so this
announcement will cover all impacted services during the next hours/days.

Thanks for your understanding and patience.

on behalf of the Infra team,

IBM ppc64{le} builders offline for CBS/koji


Our monitoring platform informed us that the ppc64{le} builders used in
our Koji environment are currently unreachable (the same goes for the
underlying IBM P8 hypervisor node).
After investigation, it seems that it's a hardware issue.

I'll work with the DC people next Monday to see how we can try to fix
the issue, and I'll also investigate some options to have at least one
builder per arch ASAP (so ppc64 and ppc64le).

That means that people building for multiple arches will be impacted as
long as those builders are not back online (or some equivalent node
that can take those $arch jobs).

I'll keep

CERN pre-dojo meeting topic : Packages signing

Package Signing
- SIG chairs should request feedback/insight into the package signing
process --> KB
- sometimes, there is a delay in package signing/sync to
- have keys been generated securely (known bugs in package versions that
make less secure keys?)

Just a copy/paste from the meeting minutes here, to start a dedicated
I can't answer that, but I'll let involved people participate in that

CERN pre-dojo meeting topic : Sig request for sig specific git

sigs would like to use centpkg / lookaside, build direct through git to koji
authentication requirements to
Fabian to evaluate git solutions and report back to sig chairs.
mrunge has volunteered to be the "guinea pig" of the new system

So let's start the thread, to be sure that all involved people would be
able to comment.
SIGs would like to start building from git, and not from SRPMs they have
to create/upload themselves.

For GIT itself, several options exist :
- using : that would mean SIG would need access to
specific repositories and al

CERN pre-dojo meeting topic : build bots accounts for CBS

Here are some notes taken from the CERN pre-dojo meeting from last week :

Allow SIGs to have separate accounts for build bots
- separate user accounts from "bot" accounts for security reasons
- [proposal] have an email alias (not list) per sig for the bots, like
sig-<bla> pointing to the SIG's chair
- [proposal] SIG chair must request or approve email alias requests/
ACO account creation sent to CentOS Board chairman

So, (as also discussed yesterday in the CBS meeting -
<a href="" title=""></a>)

The pro

meetbot/centbot meeting minutes available after each new meeting

When we started to expose the minutes of the meetings held in
#centos-devel (on on the website some time ago, it was
mandatory for someone from infra team to pull/organize/push the minutes
files to <a href="" title=""></a>.

Thanks to the work done by John R Dennison (Bahumbug nickname
on irc), we automated the work behind the scenes so that after each
meeting, when a SIG chair terminates the meetbot meeting, the full link
to the meeting minutes is displayed in #centos-devel and those minutes
are directly available under$path

Upcoming Openstack/RDO (Pike release) cloud in CI environment


Some of you know that we initially deployed an RDO/OpenStack cloud in
the CI environment, so that it would be possible to get cloud
instances/VMs instead of bare metal when having to test something.
Such VMs could/can also be used to add multiple jenkins slaves when needed.

The needed work in Duffy wasn't merged yet, so the CI projects couldn't
request a VM (yet).

Given that it was based on Newton (now EOL), and that we're also
slowly adding other arches (like aarch64, ppc64, ppc64le) in CI, we
discussed the best way to add those arches to the existing cloud, but that's
not po

Expiring/Disabling ACO account[s]

It was already discussed several times during CBS meetings [1] but in
the last one held yesterday [2] we decided to enforce the following rules :

- People with an active account on <a href="" title=""></a> whose
TLS cert expires in the following two weeks (now 3 weeks) will get a
weekly mail notification (already in place for some months now) inviting
them to renew their TLS cert

= New
- People still marked as "active" in ACO but who ignored the weekly
reminder will get a different reminder once their cert has expired.
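The rules above can be summarised in a small sketch (the dates are arbitrary examples; the 3-week window comes from the post):

```shell
# Classify which reminder an account should get, per the rules above:
# expired cert -> post-expiry reminder; expiring within 3 weeks ->
# weekly renewal reminder; otherwise no reminder yet.
reminder_kind() {
    # $1 = cert expiry date, $2 = "today", both YYYY-MM-DD
    expiry=$(date -d "$1" +%s)
    today=$(date -d "$2" +%s)
    if [ "$expiry" -lt "$today" ]; then
        echo "post-expiry-reminder"
    elif [ $(( expiry - today )) -le $(( 21 * 86400 )) ]; then
        echo "weekly-renewal-reminder"
    else
        echo "none"
    fi
}
reminder_kind 2017-10-10 2017-10-01   # expires in 9 days
reminder_kind 2017-09-20 2017-10-01   # already expired
```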

CentOS 7.4.1708 as basic el7 buildroot in CBS


As discussed on the list, we wanted to push 7.4.1708 first to CBS/Koji
and at the same time to CI (so that what's built against a specific tree
can be tested against the same set of pkgs in CI)

We'll thus make 7.4.1708 available within CBS in the next hours, and
for *all* arches currently covered within CBS.

That means that we'll switch (before it's pushed to external mirrors and
so be announced) the buildroots to start using the following repositories:


That will be done in parallel for x86_64, aarch64, ppc64 and ppc64le.

We're just launching some sanity te

DevCloud migration (users, please read)


Some of you actually use our DevCloud (<a href="" title=""></a>)
as a playground/sandbox environment when willing to test some
deployments/scratch operations.

We want you to know that we'll completely refresh that environment
next week, so DevCloud itself will be unavailable during the reinstall
phase, and all VMs currently running in that environment will also
be destroyed (it's a "cloud" for dev/testing, so everything should be
considered ephemeral).

Details of the migration:
- setting up two new dedicated storage boxes (each using SSD disks for
faster storage backend

Configuration Management SIG status and new pkg proposal : ARA (Ansible Run Analysis)


While, technically speaking, a ConfigManagement SIG was approved a long
time ago
(<a href="" title=""></a>), it
seems that nothing was built at all, and that the initial people probably
lost interest in that idea (?)

The interesting part is that other SIGs that were relying on such config
management tools (like puppet or ansible) built them themselves in their
own tags :
- puppet : <a href="" title=""></a>
- ansible : <a href="" title=""></a>

I was myself just the "sponsor" for other community peop