DevHeads.net

Postings by Fabian Arrotin

Cleaning-up Extras for inconsistencies (aka golang in aarch64)

Last week I tried to build a pkg for infra that has all the arches
enabled (so x86_64, ppc64, ppc64le and aarch64).

That package has "BuildRequires: golang >= 1.8", which is in the base
distro, but the --scratch build failed, starting with aarch64,
complaining with :
"DEBUG util.py:417: Error: No Package found for golang >= 1.8 "

So, while there is a golang 1.8.3 package in [base], there is still an
old version in [extras] :
golang.aarch64    1.6.3-2.el7    extras
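For reference, a quick way to spot that kind of duplicate across repos
is something like this (just a sketch, assuming yum-utils is installed
on an el7 box with both [base] and [extras] enabled) :

  # list every available golang build and the repo it comes from
  repoquery --show-duplicates --qf '%{repoid} %{name}-%{version}-%{release}.%{arch}' golang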

And because koji will not automatically pick the highest ENVR, but rather
a pkg bas

- planned outage : All services

Due to important security updates that we need to apply within the whole
CentOS infra, please be aware that some nodes/services will be
unresponsive during that maintenance window.

As it targets both CentOS 6 and CentOS 7, we'll probably apply those
updates ASAP, but we won't send a notification for each impacted service,
so this announcement will cover all impacted services during the next
hours/days.

Thanks for your understanding and patience.

on behalf of the Infra team,

IBM ppc64{le} builders offline for CBS/koji

Hi,

Our monitoring platform informed us that the ppc64{le} builders used in
our Koji environment are currently unreachable (same for the underlying
IBM P8 hypervisor node).
After investigation, it seems that it's a hardware issue.

I'll work with the DC people next Monday to see how we can try to fix
the issue, and I'll also investigate some options to have at least one
builder per arch ASAP (so ppc64 and ppc64le).

That means that people building for multiple arches will be impacted as
long as those builders are not back online (or until some equivalent node
can take those $arch jobs).

I'll keep

CERN pre-dojo meeting topic : Packages signing

<paste>
Package Signing
- SIG chairs should request feedback/insight into the package signing
process --> KB
- sometimes, there is a delay in package signing/sync to mirror.centos.org
- have keys been generated securely (known bugs in package versions that
make less secure keys?)
</paste>

Just a copy/paste from the meeting minutes here, to start a dedicated
thread.
I can't answer that, but I'll let involved people participate in that
thread.
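In the meantime, if you want to check whether a given rpm you pulled from
mirror.centos.org is already signed (and with which key), something like
this works (sketch ; foo.rpm is just a placeholder) :

  # -K/--checksig verifies digests/signatures, -v shows the gpg key id used
  rpm -Kv foo.rpm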

CERN pre-dojo meeting topic : Sig request for sig specific git

<paste>
sigs would like to use centpkg / lookaside, build direct through git to koji
authentication requirements to accounts.centos.org
Fabian to evaluate git solutions and report back to sig chairs.
mrunge has volunteered to be the "guinea pig" of the new system
</paste>

So let's start the thread, to be sure that all involved people are
able to comment.
SIGs would like to start building from git, and not from SRPMs they have
to create/upload themselves.
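To make the goal concrete, the end result would presumably look like the
fedpkg-style workflow below (purely a sketch : centpkg isn't wired up to
our lookaside/koji yet, and the repo layout shown is hypothetical) :

  # clone the dist-git repo for a package, then build straight from git
  centpkg clone sig-configmanagement/ansible
  cd ansible
  centpkg build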

For git itself, several options exist :
- using git.centos.org : that would mean SIGs would need access to
specific repositories and al

CERN pre-dojo meeting topic : build bots accounts for CBS

Here are some notes taken from the CERN pre-dojo meeting from last week :

<paste>
Allow SIGs to have separate accounts for build bots
- separate user accounts from "bot" accounts for security reasons
- [proposal] have an email alias (not list) per sig for the bots, like
sig-<bla>@centos.org pointing to the SIG's chair
- [proposal] SIG chair must request or approve email alias requests/
ACO account creation sent to CentOS Board chairman
</paste>

So, (as also discussed yesterday in the CBS meeting -
https://www.centos.org/minutes/2017/October/centos-devel.2017-10-23-14.01.log.html)

The pro

meetbot/centbot meeting minutes available after each new meeting

When we started to expose the minutes of the meetings held in
#centos-devel (on irc.freenode.net) on the website some time ago, it was
necessary for someone from the infra team to pull/organize/push the
minutes files to https://www.centos.org/minutes.

Thanks to the work done by John R Dennison (Bahumbug nickname
on IRC), we automated the work behind the scenes so that after each
meeting, when a SIG chair terminates the meetbot meeting, the full link
to the meeting minutes is displayed in #centos-devel and the minutes are
directly available under https://www.centos.org/minutes/$path
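As a reminder for SIG chairs, the usual meetbot flow in #centos-devel
looks like this (sketch ; the meeting name is just an example) :

  #startmeeting CBS/Infra
  #topic status
  ...
  #endmeeting   <- this now triggers the automatic publication of the minutes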

Upcoming Openstack/RDO (Pike release) cloud in CI environment

Hi,

Some of you know that we initially deployed an RDO/OpenStack cloud in
the CI environment, so that it would be possible to get cloud
instances/VMs instead of bare metal when having to test something.
Such VMs can also be used to add multiple jenkins slaves when needed.

The needed work in Duffy hasn't been merged yet, so the CI projects
can't request a VM (yet).

Given that it was based on Newton (now EOL) and that we're also
slowly adding other arches (like aarch64, ppc64, ppc64le) in CI, we
discussed the best way to add those arches to the existing cloud, but
that's not po

Expiring/Disabling ACO account[s]

It was already discussed several times during CBS meetings [1] but in
the last one held yesterday [2] we decided to enforce the following rules :

- People with an active account on https://accounts.centos.org who have a
TLS cert expiring in the next two weeks (now three weeks) will get a
weekly mail notification (already in place for some months now) inviting
them to renew their TLS cert

= New
- People still marked as "active" in ACO but who ignored the weekly
reminder will get a different reminder once their cert has expired.

CentOS 7.4.1708 as basic el7 buildroot in CBS

Hi,

As discussed on the list, we wanted to push 7.4.1708 first to CBS/Koji
and at the same time to CI (so that what's built against a specific tree
can be tested against the same set of pkgs in CI)

We'll thus make 7.4.1708 available within CBS in the next hours, and
for *all* arches currently covered within CBS.

That means that we'll switch the buildroots (before it's pushed to
external mirrors and so announced) to start using the following repositories:

7.4.1708/{os,updates,extras}

That will be done in parallel for x86_64, aarch64, ppc64 and ppc64le.
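If you want to verify what the buildroots point to once we've flipped the
switch, the koji client can show the external repos (a sketch, assuming
the cbs wrapper from centos-packager, or an equivalent koji profile, is
configured on your side) :

  # list the external repositories known to cbs.centos.org
  cbs list-external-repos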

We're just launching some sanity te

DevCloud migration (users, please read)

Hi,

Some of you actually use our DevCloud (https://wiki.centos.org/DevCloud)
as a playground/sandbox environment when wanting to test some
deployments/scratch operations.

Please note that we'll completely refresh that environment
next week : DevCloud itself will be unavailable during the reinstall
phase, and all VMs currently running in that environment will also
be destroyed (it's a "cloud" for dev/testing, so everything should be
considered ephemeral).

Details of the migration:
- setting up two new dedicated storage boxes (each using SSD disks for
faster storage backend

Configuration Management SIG status and new pkg proposal : ARA (Ansible Run Analysis)

Hi,

While technically speaking a ConfigManagement SIG was approved a long
time ago
(https://wiki.centos.org/SpecialInterestGroup/ConfigManagementSIG), it
seems that nothing was built at all, and that the initial people probably
lost interest in that idea (?)

The interesting part is that other SIGs that were relying on such config
management tools (like puppet or ansible) built them themselves in their
own tags :
- puppet : https://cbs.centos.org/koji/packageinfo?packageID=390
- ansible : https://cbs.centos.org/koji/packageinfo?packageID=1947

I was myself just the "sponsor" for other community peop

CBS Outage : ppc64/ppc64le builders in offline mode

Hi

As discussed in the CBS meeting today
(https://www.centos.org/minutes/2017/may/centos-devel.2017-05-22-14.00.html),
and also reflected on https://status.centos.org , we currently suffer
from a hardware issue that impacts our connectivity to some nodes,
including the ppc64/ppc64le builders.

That means that SIG people wanting to build against/for those arches
will not be able to see their build jobs/tasks running as long as
connectivity isn't restored (and no ETA yet).
Worth noting that those builders are disabled at the koji level
(https://cbs.centos.org/koji/hosts?state=all&order=name)

We'll

updated external-repos for CBS with CentOS 6.9

As we announced/released 6.9 today
(https://lists.centos.org/pipermail/centos-announce/2017-April/022351.html),
we reflected that in the internal mirror used by koji/cbs.centos.org,
so from now on all content will be built against 6.9.

We also regenerated all the metadata from within koji to reflect this.
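For the curious, that regeneration boils down to asking koji to rebuild
the repodata for the relevant buildroot tags, along these lines (sketch ;
the tag name is just an example, and the command needs koji admin rights) :

  # rebuild the repodata for an el6 buildroot tag after the external repos changed
  koji regen-repo sig-core6-el6-buildroot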

Should you have issues, feel free to discuss those on this list, or
#centos-devel on irc.freenode.net

- Pre-announced major outage for several services (please read)

Due to some reorganization at the DC/Cage level, we'll have to
shutdown/move/reconfigure a big part of our hosted infra for the
following services :
- cbs.centos.org (Koji)
- accounts.centos.org (auth backend)
- ci.centos.org (jenkins-driven CI environment)

We're working on a plan to minimize the downtime/reconfiguration part,
but at first sight, due to the hardware move of the racks/recabling
parts/etc, the announced downtime will probably be ~48h.

What does that mean ?

TLS/https for {buildlogs,cloud}.centos.org

Hi,

Last week we enabled https for buildlogs.centos.org and
cloud.centos.org, but we haven't (yet) enforced the redirection.

So both are available over http:// and https://

For buildlogs, as we already have redirection in place (RewriteRules)
for content to the backend (see
https://lists.centos.org/pipermail/centos-devel/2016-March/014552.html),
we can enforce the redirection without any issue (and the good news is
that the CDN backend also supports https, so it will be https end-to-end),
so I'll implement the additional http=>https redirect soon.
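For those wondering, that additional redirect is just the usual
mod_rewrite snippet, roughly like this (a minimal sketch, not necessarily
the exact rules we'll deploy on the buildlogs nodes) :

  # send every plain-http request to the same URL over https
  RewriteEngine On
  RewriteCond %{HTTPS} off
  RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]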

My only concern is about doing the same for clou

building live media on CBS

There was a discussion earlier today in #centos-devel about a way to
build live iso images on cbs.centos.org.

Koji seems to permit that through "koji spin-livecd"

Documentation is available here : https://docs.pagure.org/koji/image_build/

One thing to keep in mind though is that one needs to have the livecd-tools
pkg installed, matching the os distro/version the livecd
will be based on : you need a CentOS 7 host to build a CentOS 7 liveCD,
and that can't work to build a CentOS 6 liveCD (see as an example
https://bugzilla.redhat.com/show_bug.cgi?id=1035276)

Depending on the t

Hardware migration status (full/correct version)

As announced some time ago (
https://lists.centos.org/pipermail/centos-announce/2016-September/022065.html
), there was a big hardware migration/DC relocation happening yesterday
that impacted several of our public services.
Here is a small report of that migration, which finally took less time
than previously announced.

* 2016-10-10 - 1pm UTC : we started to preconfigure all changes, and
powered down all the services in correct order
* 2016-10-10 - 2pm => 6pm UTC : hardware was moved/migrated
* 2016-10-10 - 6pm => 8pm UTC : hardware was (in serial order) plugged
back in and ipmi/oob interfac

Hardware migration status

As announced some time ago (
https://lists.centos.org/pipermail/centos-announce/2016-September/022065.html
), there was a big hardware migration/DC relocation happening yesterday
that impacted several of our public services.
Here is a small report of that migration, which finally took less time
than previously announced.

* 2016-10-10 - 3pm UTC : we started to preconfigure all changes, and
powered down all the services in correct order
* 2016-10-10 - 3pm UTC
* 2016-10-10 - 3pm UTC
* 2016-10-10 - 3pm UTC
* 2016-10-10 - 3pm UTC

- CentOS Bug tracker migration / maintenance window

We have to upgrade our current MantisBT instance for the CentOS bug
tracker service (aka https://bugs.centos.org)

Migration is scheduled for "Friday October 7th, 12:15 pm UTC time".
You can convert to local time with $(date -d '2016-10-7 12:15 UTC')

The expected "downtime" is estimated at ~15 minutes, the time needed to
update the MantisBT code, run the mysql schema update, and then put the
node back in production mode.

Thanks for your understanding and patience.

on behalf of the Infra team,

- Major outage for several services (please read)

As pre-announced earlier this year
(https://lists.centos.org/pipermail/centos-devel/2016-May/014792.html)
we'll have to move a part of our existing hardware to a new DC.
That means that the following public services will be powered off and
unreachable :

- <a href="https://cbs.centos.org" title="https://cbs.centos.org">https://cbs.centos.org</a> (Koji build farm front end and also
builders/storage nodes behind)
- <a href="https://accounts.centos.org" title="https://accounts.centos.org">https://accounts.centos.org</a> (auth backend)
- <a href="https://ci.centos.org" title="https://ci.centos.org">https://ci.centos.org</a> (jenkins-driven CI environment and all nodes in
that dedicated/isolated environment)

What does that mean ?
- Nobody from the SIGs (https://wiki.centos.org/SpecialInterestGroup)
will b

DevCloud maintenance window notification : September 5th, 5:30 am UTC

We'll have to shut down our DevCloud [https://wiki.centos.org/DevCloud]
for maintenance reasons :
- upgrade from gluster 3.6.x to gluster 3.8.x
- change the gluster bricks layout for better performance (adding a
second disk to each node and using an underlying striped LV for each
brick in the gluster setup ; see the sketch below)
- moving the files between gluster volumes
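To give an idea of what a "striped LV per brick" means in practice, the
brick preparation on each node would look roughly like this (a sketch
only ; VG/LV names and sizes are hypothetical) :

  # stripe one LV across the two disks of the node, then format it as an XFS brick
  lvcreate --type striped -i 2 -I 128 -L 500G -n brick01 vg_gluster
  mkfs.xfs -i size=512 /dev/vg_gluster/brick01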

Important notice : if you are a CentOS contributor/developer with some
Virtual Machines in DevCloud, ensure that everything is correctly set up
on your side, as we'll only (from a hypervisor PoV) issue an ACPI shutdown
to the (still running) VMs.
So i

Question about authoritative SIG for specific pkg[s] (wrt ConfigMgmt SIG)

I just had a look at CBS and was wondering how one SIG (so not
ConfigMgmt SIG specific, but let's use that as an example) can interact
with other SIGs.

One example is Ansible : it seems some other SIGs are relying on it, and
so currently the ConfigMgmt SIG isn't able to build it as it's already
built with the same ENVR but in a different target/tag :
https://cbs.centos.org/koji/packageinfo?packageID=1947

What would be the options for this ?
In fact, the direct option is for the ConfigMgmt SIG to just tag that
build (for example for ansible-2.1.0.0-1.el7) so that it will appear in
the correct
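In koji CLI terms, that kind of cross-SIG tagging boils down to something
like this (sketch : the destination tag name below is hypothetical, and
the operation needs the right permissions on that tag) :

  # tag the existing ansible build into a ConfigMgmt SIG tag without rebuilding it
  cbs tag-build configmanagement7-ansible-candidate ansible-2.1.0.0-1.el7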

CBS users : x509 user certificates expiring policy

Due to some reports in #centos-devel on irc.freenode.net, we thought it
would be good to send a reminder at least on this list about the x509
cert/policy for cbs.centos.org :

If you have an account on https://accounts.centos.org and are an active
builder/developer using https://cbs.centos.org to build pkgs for one or
multiple SIGs, it's worth knowing that we initially had a ~6 month
validity period for certificates generated through ACO.

We documented the "Expired Certificate" error on the dedicated wiki page
(see
https://wiki.centos.org/HowTos/CommunityBuildSystem?#head-e81bf95796a59cbfd

Important infra outage notification - dates to be discussed

Due to some reorganization at the DC/Cage level, we'll have to
shutdown/move/reconfigure a big part of our hosted infra for the
following services :
- cbs.centos.org (Koji)
- accounts.centos.org (auth backend)
- ci.centos.org (jenkins-driven CI environment)

We're working on a plan to minimize the downtime/reconfiguration part,
but at first sight, due to the hardware move of the racks/recabling
parts/etc, the announced downtime will probably be ~48h.

What does that mean ?

DevCloud maintenance window notification

Due to recent changes in the racks hosting the DevCloud nodes
(https://wiki.centos.org/DevCloud), we'll have to reorganize the
physical placement of those nodes.
That means that we'll have to shut down/power off the whole DevCloud
infra, migrate the nodes in another rack, and slowly restart all the
services running on those nodes.

Important notice : if you are a CentOS contributor/developer with some
Virtual Machines in DevCloud, ensure that everything is correctly set up
on your side, as we'll only (from a hypervisor PoV) issue an ACPI shutdown
to the VMs

Other impacted services:
- <a href="http://pl" title="http://pl">http://pl</a>

Infra : changes to buildlogs.centos.org

Hi,

Just to let you know in advance that we'll make some modifications to the
buildlogs.centos.org nodes.
We got a proposal from a CDN infrastructure company (CDN77.com) willing
to be a sponsor for the CentOS Project, and so we'd like to use that
service for the testing/dev artifacts (so rpm packages, iso images,
qcow2 images, etc) so that users can get them faster than when served
from our current buildlogs.centos.org nodes.

What does that mean for you ?

DevCloud

Hi,

Earlier this evening we suffered from a big power outage that
affected some of our infra services.
Some small public services like :
- planet.centos.org
- seven.centos.org
were down, but connectivity was restored and those services are back online.

We had to spend more time on the DevCloud setup
(https://wiki.centos.org/DevCloud) : we first had to verify the
underlying block devices and the status of that gluster setup (in
Distributed/Replicated mode), and then slowly restart the hypervisor
controller and then the Virtual Machines.
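For reference, the post-outage sanity checks on the gluster side are
basically these (sketch ; the volume name is hypothetical) :

  # make sure all bricks are online and nothing is left to self-heal
  gluster volume status
  gluster volume heal devcloud-vol info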

scheduled infra outage : cbs.centos.org (Koji build farm)

Just to let you know that we have scheduled a maintenance window that
will impact the following service[s] :

- cbs.centos.org

Migration is scheduled for "Thursday December 3rd, 9:00 am UTC time".
You can convert to local time with $(date -d '2015-12-3 9:00 UTC')

The expected "downtime" is estimated at ~2h.
During that time, the koji nodes (koji hub, web, builders) will be
unavailable.
We'll send a mail when the whole koji setup is available again.

Thanks for your understanding and patience.

on behalf of the Infra team,

CentOS 7 liveCD survey

Hi,

While working on the next 7.1511 Live media, I discovered that the
size of the current CentOS 7 LiveCD would be more than 700MB.

It's due to some packages becoming bigger and bigger, and also to the
big Gnome 3.8 -> 3.14 rebase.
One obvious package I can remove from the packages manifest (and a
package which itself is consuming more and more space) is Firefox.

If I remove it from the packages manifest (only for the LiveCD, it will
obviously stay for the LiveGnome and LiveKDE DVD iso images), the size
is then back to 650 MB, which means that one would still be able
to burn it on a CD.

But the real quest