DevHeads.net

Ceph-14.x.x, dropping 32-bit archs

Ceph 14.x.x (Nautilus) will no longer be built on i686 and armv7hl archs
starting in fedora-30/rawhide.

The upstream project doesn't support them. The armv7hl builders don't have
enough memory (or address space) to build some components.

And the other active maintainer (branto) and I don't have cycles to
devote to keeping it building on 32-bit archs.

(FWIW, ceph-12.2.9 (luminous) is currently in rawhide, f29, and f28, and
it has packages for i686 and armv7hl for people who want to run ceph on
32-bit archs.)

Comments

Re: Ceph-14.x.x, dropping 32-bit archs

By Richard W.M. Jones at 12/10/2018 - 14:22

Rather predictably this has broken libvirt on i686 and armv7hl
(i.e. 32-bit arches):

DEBUG util.py:439: - package libvirt-daemon-driver-storage-4.10.0-1.fc30.i686 requires libvirt-daemon-driver-storage-rbd = 4.10.0-1.fc30, but none of the providers can be installed
DEBUG util.py:439: - conflicting requests
DEBUG util.py:439: - nothing provides librados.so.2 needed by libvirt-daemon-driver-storage-rbd-4.10.0-1.fc30.i686

I filed a bug against libvirt, since it appears that no client
library for ceph will be forthcoming:

https://bugzilla.redhat.com/show_bug.cgi?id=1657928

Rich.

Re: Ceph-14.x.x, dropping 32-bit archs

By Peter Robinson at 12/05/2018 - 22:50

Kaleb,

Firstly, the title is misleading, as there was no heads-up. A heads-up
is notice given before you actually push the change, not when you make
it.

As others have asked in the thread, can we possibly build the client only?

Re: Ceph-14.x.x, dropping 32-bit archs

By Kaleb S. KEITHLEY at 12/06/2018 - 08:04

On 12/5/18 9:50 PM, Peter Robinson wrote:
I suggest you take this up with branto. He's the one who built it
without 32-bit archs, without any warning. I only found out about it when
I got build notices from koji (or pagure, or whatever).

You would have preferred no notice at all?

We?

Since the above seems to have been unclear:

But people can always send patches. ;-)

Re: Ceph-14.x.x, dropping 32-bit archs

By Kaleb S. KEITHLEY at 12/06/2018 - 08:11

If someone else would like to take over as maintainer, I'm happy to give
it up.

LMK.

Re: Ceph-14.x.x, dropping 32-bit archs

By Marcin Juszkiewicz at 12/05/2018 - 09:23

On 05.12.2018 at 14:14, Kaleb S. KEITHLEY wrote:

BTW - how much memory is needed to build Ceph 14?

Re: Ceph-14.x.x, dropping 32-bit archs

By Dan Horák at 12/05/2018 - 09:34

On Wed, 5 Dec 2018 14:23:49 +0100

have you tried building with reduced debuginfo, eg. -g1 or even -g0?

I wonder how many broken deps it will cause.

Dan

Re: Ceph-14.x.x, dropping 32-bit archs

By Kaleb S. KEITHLEY at 12/05/2018 - 09:45

On 12/5/18 8:34 AM, Dan Horák wrote:
More — apparently — than the armv7hl builders have. :-) branto may know.

branto told me that he has tried all the different optimization levels.

Don't know. Hence this heads up warning.

Re: Ceph-14.x.x, dropping 32-bit archs

By Daniel P. Berrange at 12/05/2018 - 10:31

On Wed, Dec 05, 2018 at 08:45:19AM -0500, Kaleb S. KEITHLEY wrote:
Is there any consideration given to only building the ceph client
pieces on 32-bit? Presumably those parts are simpler, and thus
not likely to hit the address/memory limits, and more tractable
to support.

I very much doubt people would run the ceph server parts on 32-bit,
so any usage of ceph on 32-bit is likely to be limited to the
client pieces.

repoquery can report on direct dependencies that would be broken by
any ceph packages being removed. There could be transitive ripples
out from there. From my own POV this would impact qemu & libvirt,
which would need to conditionally turn off their rbd support for
those archs.

Regards,
Daniel

Re: Ceph-14.x.x, dropping 32-bit archs

By Richard W.M. Jones at 12/05/2018 - 12:39

I forgot about librados, so there's actually a much bigger list:

# dnf repoquery -q --whatrequires 'librados.so.2()(64bit)'
ceph-common-1:12.2.8-1.fc29.x86_64
ceph-common-1:12.2.9-1.fc29.x86_64
ceph-radosgw-1:12.2.8-1.fc29.x86_64
ceph-radosgw-1:12.2.9-1.fc29.x86_64
ceph-test-1:12.2.8-1.fc29.x86_64
ceph-test-1:12.2.9-1.fc29.x86_64
fio-0:3.7-2.fc29.x86_64
librados-devel-1:12.2.8-1.fc29.x86_64
librados-devel-1:12.2.9-1.fc29.x86_64
libradosstriper1-1:12.2.8-1.fc29.x86_64
libradosstriper1-1:12.2.9-1.fc29.x86_64
librbd1-1:12.2.8-1.fc29.x86_64
librbd1-1:12.2.9-1.fc29.x86_64
librgw2-1:12.2.8-1.fc29.x86_64
librgw2-1:12.2.9-1.fc29.x86_64
libvirt-daemon-driver-storage-rbd-0:4.7.0-1.fc29.x86_64
nfs-ganesha-0:2.7.0-3.fc29.x86_64
nfs-ganesha-0:2.7.1-2.fc29.x86_64
nfs-ganesha-rados-grace-0:2.7.0-3.fc29.x86_64
nfs-ganesha-rados-grace-0:2.7.1-2.fc29.x86_64
python-rados-1:12.2.8-1.fc29.x86_64
python-rados-1:12.2.9-1.fc29.x86_64
python-rbd-1:12.2.8-1.fc29.x86_64
python-rbd-1:12.2.9-1.fc29.x86_64
python-rgw-1:12.2.8-1.fc29.x86_64
python-rgw-1:12.2.9-1.fc29.x86_64
python2-cradox-0:2.1.0-2.fc29.x86_64
python3-cradox-0:2.1.0-2.fc29.x86_64
python3-rados-1:12.2.8-1.fc29.x86_64
python3-rados-1:12.2.9-1.fc29.x86_64
python3-rbd-1:12.2.8-1.fc29.x86_64
python3-rbd-1:12.2.9-1.fc29.x86_64
python3-rgw-1:12.2.8-1.fc29.x86_64
python3-rgw-1:12.2.9-1.fc29.x86_64
qemu-block-rbd-2:3.0.0-1.fc29.x86_64
qemu-block-rbd-2:3.0.0-2.fc29.x86_64
rbd-fuse-1:12.2.8-1.fc29.x86_64
rbd-fuse-1:12.2.9-1.fc29.x86_64
rbd-mirror-1:12.2.8-1.fc29.x86_64
rbd-mirror-1:12.2.9-1.fc29.x86_64
rbd-nbd-1:12.2.8-1.fc29.x86_64
rbd-nbd-1:12.2.9-1.fc29.x86_64
scsi-target-utils-rbd-0:1.0.70-4.fc28.x86_64
xrootd-ceph-1:4.8.4-2.fc29.x86_64
xrootd-ceph-1:4.8.5-2.fc29.x86_64

Rich.
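(A quick way to boil a list like that down to unique package names: strip the epoch:version-release.arch suffix and deduplicate. The file name deps.txt and the sed pattern below are illustrative assumptions, not part of the original commands; the sample data is an abbreviated copy of the output above.)

```shell
# Save a few lines of the repoquery output (abbreviated sample) and
# strip the trailing -EPOCH:VERSION-RELEASE.ARCH to get bare names.
cat > deps.txt <<'EOF'
ceph-common-1:12.2.8-1.fc29.x86_64
ceph-common-1:12.2.9-1.fc29.x86_64
qemu-block-rbd-2:3.0.0-1.fc29.x86_64
rbd-nbd-1:12.2.8-1.fc29.x86_64
rbd-nbd-1:12.2.9-1.fc29.x86_64
EOF
# '-[0-9]+:' matches the epoch, then version, then release.arch at end of line.
sed -E 's/-[0-9]+:[^-]+-[^-]+$//' deps.txt | sort -u
```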

Re: Ceph-14.x.x, dropping 32-bit archs

By Tomasz Torcz at 12/05/2018 - 13:02

“Client bits” also means stuff needed for accessing file-system
interface of Ceph:
– nfs-ganesha-ceph.x86_64
– ceph-fuse

and for some users – the rados gateway bits:
– nfs-ganesha-rgw.x86_64
– ceph-radosgw.x86_64

Re: Ceph-14.x.x, dropping 32-bit archs

By Richard W.M. Jones at 12/05/2018 - 12:36

On Wed, Dec 05, 2018 at 02:31:10PM +0000, Daniel P. Berrangé wrote:
As Dan says, I'd like to know if you (Kaleb) considered building only
the client bits (librbd1 I think?). It's something which libguestfs
needs too, albeit indirectly.

Assuming I've got the right command, the complete list of reverse
dependencies for the client side of Ceph is:

# repoquery -q --whatrequires 'librbd.so.1()(64bit)'
ceph-common-1:12.2.8-1.fc29.x86_64
ceph-common-1:12.2.9-1.fc29.x86_64
ceph-test-1:12.2.8-1.fc29.x86_64
ceph-test-1:12.2.9-1.fc29.x86_64
fio-0:3.7-2.fc29.x86_64
librbd-devel-1:12.2.8-1.fc29.x86_64
librbd-devel-1:12.2.9-1.fc29.x86_64
libvirt-daemon-driver-storage-rbd-0:4.7.0-1.fc29.x86_64
python-rbd-1:12.2.8-1.fc29.x86_64
python-rbd-1:12.2.9-1.fc29.x86_64
python3-rbd-1:12.2.8-1.fc29.x86_64
python3-rbd-1:12.2.9-1.fc29.x86_64
qemu-block-rbd-2:3.0.0-1.fc29.x86_64
qemu-block-rbd-2:3.0.0-2.fc29.x86_64
rbd-fuse-1:12.2.8-1.fc29.x86_64
rbd-fuse-1:12.2.9-1.fc29.x86_64
rbd-nbd-1:12.2.8-1.fc29.x86_64
rbd-nbd-1:12.2.9-1.fc29.x86_64
scsi-target-utils-rbd-0:1.0.70-4.fc28.x86_64

Rich.

Re: Ceph-14.x.x, dropping 32-bit archs

By Marcin Juszkiewicz at 12/05/2018 - 10:02

On 05.12.2018 at 14:45, Kaleb S. KEITHLEY wrote:
Random Fedora armhf builder hardware info:

Memory:
total used free shared buff/cache available
Mem: 24929616 103092 24375448 348 451076 24524500
Swap: 18869244 21348 18847896

Storage:
Filesystem Size Used Avail Use% Mounted on
/dev/vda2 135G 5.7G 122G 5% /
Seriously, 24 GB of RAM + 18 GB of swap is not enough to build ceph? That's
more real memory than the x86-64 builders have (15 387 432 kB RAM + 134 216 700 kB swap).

I understand the "we drop it because upstream does not care about 32-bit" reasoning.

Re: Ceph-14.x.x, dropping 32-bit archs

By Dan Horák at 12/05/2018 - 10:08

On Wed, 5 Dec 2018 15:02:41 +0100

The problem is usually the 4 GB address space on 32-bit platforms
(much less of which is available for user data) when building a large
C++ codebase. Either the compiler OOMs or the linker OOMs. That's why
reducing the generated debuginfo often helps.

Dan