DevHeads.net

Can we maybe reduce the set of packages we install by default a bit?

Heya,

today I installed the current Fedora 30 Workstation beta on my new
laptop. It was a bumpy ride, I must say (the partitioner (blivet?)
crashed five times or so on me, always kicking me out of anaconda
again, just because I wanted to undo something). But I don't really
want to discuss that. What I do want to discuss is this:

Can we maybe reduce the default set of packages a bit? In particular
the following ones I really don't think should be in our default
install:

1. multipathd. On a workstation, uh?? I obviously have no multipath
devices configured on my laptop, how would I even? Has anyone? This
is a really nasty one: to this day it pulls in udev settle, which
is really backwards, and slows down our boot considerably. No
current daemon should require udev settle, any daemon that still
does is just backwards because it assumes that hardware would
guarantee to have shown up at some specific time at boot, though in
today's world that's really not how this works: hardware can take
any time it wants, and thus instead of "waiting for everything" you
can reasonably just wait for the stuff you know you actually need,
based on your configuration. systemd-udev-settle.service however is
a compat kludge that is supposed to provide "wait for everything",
though this is racy and flaky. To say this clearly: anything that
still relied on systemd-udev-settle.service 5y ago was bad, but
still pulling that in today in 2019, and doing that in a default
fedora install is just bad bad bad. This alone costs half the boot
time on my system because it just waits for stuff for nothing, and
for what? And beyond that, this daemon is really ugly too: it logs
at high log levels during boot that it found no configuration and
hence nothing to do. Yes, obviously, but that's a reason to shut up
and proceed quickly, not to complain loudly about that so that it
even appears on the screen (I mean seriously, this is the first thing I
saw when I booted from the Fedora live media: a log message printed
all over the screen that multipathd has no working
configuration...).

2. dmraid. Not quite as bad as multipathd as it is more likely to
exist on a workstation (still quite exotic though), but also pulls
in udev settle and hence should not be in our default boot. Much
like multipathd this should be fixed to not require udev settle
anymore, and in the absence of that at least not end up in the
default fedora boot process, except for those people who actually
have dmraid.

3. atd? Do we still need that? Do we have postinst scripts that need
this? If so, wouldn't systemd-run be a better approach for those?
Isn't it time to make this an RPM people install if they want it?

4. Similarly, crond. On my fresh install it's only used by "zfs-fuse"
(and I really wonder why that is even in the default install?), and
"mdadm" wants it too (which would be great if it just used timer
units instead).

5. libvirtd. Why is this running? Can't we make this socket
activatable + exit-on-idle? While I am sure it's useful on
workstations, why run it all the time, given that only very few
users probably actually need it, and for those who do, starting it
on demand would be much more appropriate? On my freshly installed
system it is running all the time even though there are no VMs or
anything around.
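For point 4, the timer-unit alternative can be sketched like this (unit names, paths, and schedule are illustrative examples, not the actual mdadm packaging):

```ini
# mdcheck.timer (hypothetical)
[Unit]
Description=Periodic RAID check

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target

# mdcheck.service (hypothetical)
[Unit]
Description=RAID check

[Service]
Type=oneshot
ExecStart=/usr/sbin/raid-check
```

Unlike a cron entry, this needs no always-running daemon; the service is only started when the timer elapses.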

Ideally, the top 4 wouldn't be installed at all anymore (in case of
the first two at least on the systems which do not need them). But if
that's not in the cards, it would be great to at least not enable
these services anymore in the default boot so that they are only a
"systemctl enable" away for people who need them?

I wonder if the first one is rooted in a misconception about systemd's
unit condition concept: conditions are extremely lightweight: they just
bypass service start-up, that's all. They have no effect on whether
dependencies are pulled in beforehand or not, and they are only
tested the instant the service is ready to be fork()ed off. This means
multipathd.service (which has
ConditionPathExists=/etc/multipathd.conf) pulls in
systemd-udev-settle.service regardless of whether the condition holds or
fails...
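In unit-file terms, that behaviour looks roughly like this (an abridged sketch based only on the directives described above, not the full shipped unit):

```ini
# multipathd.service (abridged, illustrative)
[Unit]
# Pulled in unconditionally, at dependency-resolution time:
Wants=systemd-udev-settle.service
After=systemd-udev-settle.service
# Checked only at the instant the service would be forked off,
# and then merely skips ExecStart:
ConditionPathExists=/etc/multipathd.conf
```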

I guess I should file bz issues about all of the above, but I am not
sure against which packages? anaconda? comps (does that still exist)?
the individual packages?

It's also my hope that maybe some champion volunteers for tracking
down issues like this and fixing them? i.e. keeping udev settle out of
the default install alone would be a worthy goal for every release,
given that it doubles boot time on typical systems... Anyone up for
that?

Lennart

Comments

Re: Can we maybe reduce the set of packages we install by default

By Matthias Clasen at 04/16/2019 - 11:48

On Tue, Apr 9, 2019 at 12:08 PM Lennart Poettering < ... at 0pointer dot de>
wrote:

Another one I might add: "No stuck stop jobs" - it annoys me every single
time when I reboot and something like rngd or conmon holds up my reboot
for several minutes for no reason at all.

Re: Can we maybe reduce the set of packages we install by default

By Adam Williamson at 04/16/2019 - 12:06

On Tue, 2019-04-16 at 11:48 -0400, Matthias Clasen wrote:
I've seen the rngd stop thing, hadn't had time to investigate it yet as
more urgent fires keep showing up :/

Re: Can we maybe reduce the set of packages we install by default

By Zbigniew Jędrzejewski-Szmek at 04/17/2019 - 17:26

On Tue, Apr 16, 2019 at 09:06:02AM -0700, Adam Williamson wrote:
I opened a bug a while back, but it hasn't cropped up since I enabled
additional logging:
https://bugzilla.redhat.com/show_bug.cgi?id=1690364

Zbyszek

Re: Can we maybe reduce the set of packages we install by default

By Lennart Poettering at 04/17/2019 - 04:38

What's the story anyway for rngd? Why would userspace be better at
providing entropy to the kernel than the kernel itself? Why do we
enable it on desktops at all, such systems should not be
entropy-starved. Do we need this at all now that the kernel can use
RDRAND itself?

rngd runs as a regular system service, hence what's the point of that
altogether? I mean, it runs so late during boot, at a point where the
entropy pool is full anyway, and we need the kernel's RNG much much
earlier already (if only because systemd assigns a uuid to each
service invocation that derives from the kernel RNG, and it does that
super early). So why run a service that is supposed to fill up the
entropy pool at a point where we don't need it anymore, when the
kernel most likely can already do what it does on its own?

Isn't it time to kick rngd out of the default install, in particular
on the workstation image? Isn't keeping it around just cargo culting?

Lennart

Re: Can we maybe reduce the set of packages we install by default

By Robert Marcano at 04/22/2019 - 08:35

On 4/17/19 4:38 AM, Lennart Poettering wrote:
Non-developers, true. Developers' workstations, wrong. Just signing a
few packages (java's jarsigner) to test that your code runs fine under
those conditions can drop the entropy to near zero, making the signing
take a long time to finish.

Re: Can we maybe reduce the set of packages we install by default

By Lennart Poettering at 04/23/2019 - 06:09

Well, "jarsigner" is broken then. It appears to use /dev/random
instead of /dev/urandom. If you use the latter, then you can pull out
as much randomness as you want; it's not affected by "entropy
depletion".

See man page about that:

http://man7.org/linux/man-pages/man4/urandom.4.html

Lennart

Re: Can we maybe reduce the set of packages we install by default

By Steve Grubb at 04/17/2019 - 10:55

On Wednesday, April 17, 2019 4:38:18 AM EDT Lennart Poettering wrote:
The kernel uses RDRAND/SEED but it does not increment the entropy estimate
based on it. Another interesting thing is that TPM chips also have entropy
available, but the kernel does not use it. So, if you have a hardware-based
entropy source such as a TPM, you need rngd to move the entropy to the kernel.
It can also mine CPU jitter to create some entropy on its own, and it
supports the NIST beacon if you want that kind of entropy. Rngd greatly
helps the system recover from low-entropy situations.

I'd really like to see it start much earlier. Any way to make that happen?

The kernel cannot recover quickly when stressed by continued entropy
depletion. For example, we are required to be able to supply all guest VMs
with entropy from the host. They draw down the entropy pools, which need
replenishment. The kernel is constantly starved for entropy.

I think you're being harsh without really looking deeply into the problem. If
we could set a sysctl to tell the kernel to use a TPM or to increment the
entropy estimate when RDSEED is used, I'd agree we should consider this. And
to be honest, it should be running during an anaconda or kickstart install in
order to safely set up an encrypted disk. Also, live CD use is starved for
entropy and must use rngd to be responsive and safe. If you have a TPM, the
best use you'll get out of it is providing random numbers via rngd. :-)

-Steve

Re: Can we maybe reduce the set of packages we install by default

By Martin Kolman at 04/18/2019 - 07:39

On Wed, 2019-04-17 at 10:55 -0400, Steve Grubb wrote:

Re: Can we maybe reduce the set of packages we install by default

By Daniel P. Berrange at 04/18/2019 - 05:06

On Wed, Apr 17, 2019 at 10:55:58AM -0400, Steve Grubb wrote:
The recommendation is for virtio-rng to be backed by /dev/urandom
these days, so you won't deplete the host /dev/random anymore.

Regards,
Daniel

Re: Can we maybe reduce the set of packages we install by default

By Lennart Poettering at 04/17/2019 - 13:36

That's not true anymore. There's a kernel compile-time option now for
that, CONFIG_RANDOM_TRUST_CPU=y. And yes, the Fedora kernel has set
that for a while now.

Yeah, all that stuff is stuff the kernel could do better on its
own. If the CPU jitter stuff or the TPM stuff is a good idea, then why
not add that to the kernel natively, why involve userspace with it?
i.e. if the TPM and the CPU jitter stuff can be trusted, then the same
thing as for CONFIG_RANDOM_TRUST_CPU=y should be done: pass the random
data into the pool directly inside the kernel.

Well, no. I mean, the only way you can do that is by turning rngd into
its own init system, if you want it to run before the init
system.

That's not how the entropy pool works. Once it is full it's full, and
it doesn't run empty anymore.

OK, so I guess that point in time is now. Though it's not a sysctl,
but a compile time option (see above).

Lennart

Re: Can we maybe reduce the set of packages we install by default

By Chris Murphy at 04/17/2019 - 18:05

On Wed, Apr 17, 2019 at 11:36 AM Lennart Poettering
< ... at 0pointer dot de> wrote:
$ grep CONFIG_HW_RANDOM_TPM /boot/config-5.0.6-300.fc30.x86_64
CONFIG_HW_RANDOM_TPM=y

I've got no idea if this is for TPM 1.x or 2.x or both.

/usr/lib/systemd/system/rngd.service contains

WantedBy=multi-user.target

I'm gonna guess Steve Grubb is wondering whether it could be wanted by
an earlier target, possibly cryptsetup-pre.target? I don't see a
service file in the upstream project so this may have been selected by
the Fedora packager as a known to work option.

Re: Can we maybe reduce the set of packages we install by default

By Lennart Poettering at 04/18/2019 - 04:47

So apparently, for a long time now the kernel has actually been able to
push data from hwrngs into the kernel pool while crediting entropy:

https://lkml.org/lkml/2018/11/2/193

i.e. it's the "rng_core.default_quality=700" switch on the kernel
cmdline.

It sounds like this just needs a compile-time default that Fedora
could turn on.

Quoting from that mail: "This is better than relying on rng-tools."

WantedBy= doesn't really say much about when something is started,
just about what wants it started. It's not about ordering, it's about
requirement.

If you want to order it early, then set DefaultDependencies=no and use
Before= with some appropriate unit.
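Such a drop-in might look like this (the file name and target choices are illustrative guesses, not a tested configuration):

```ini
# /etc/systemd/system/rngd.service.d/early.conf (hypothetical)
[Unit]
DefaultDependencies=no
Before=cryptsetup-pre.target sysinit.target

[Install]
WantedBy=sysinit.target
```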

But this is all pretty much pointless, since PID 1 (systemd) itself already
needs entropy, and thus starting this after PID 1 is useless.

Lennart

Re: Can we maybe reduce the set of packages we install by default

By Steve Grubb at 04/17/2019 - 15:14

On Wednesday, April 17, 2019 1:36:08 PM EDT Lennart Poettering wrote:
Ah... the devil is in the details. It does not credit entropy. This can
easily be tested: run "systemctl stop rngd", then open 2 terminal
windows. In one terminal start this shell script:

#!/bin/sh
# Print the kernel's entropy estimate once a second
while true
do
    cat /proc/sys/kernel/random/entropy_avail
    sleep 1
done

Then in another:

cat /dev/random >/dev/null

After a couple of seconds, hit Ctrl-C to kill cat. Watch what happens to
the entropy.

I have a Kabylake system idling. It takes 3 minutes for entropy to get back
to 3k after stopping the consumer. At that point it's losing about as much as
it's gaining. If I start rngd and do the same test, my entropy bounces back to
over 3k in less than a second. As it stands today, rngd has a dramatic effect
on entropy.

Many have tried to convince upstream about this. If anyone here has influence,
please try.

I agree. :-)

And credit entropy!

Empirical evidence suggests otherwise. See the test above.

It looks as though it may be controlled as a boot commandline option, too.
But that is likely intended to disable the effect it has.

-Steve

Re: Can we maybe reduce the set of packages we install by default

By Lennart Poettering at 04/18/2019 - 04:28

Well, don't use /dev/random. Use /dev/urandom. The official
documentation declares /dev/random a "legacy interface".

http://man7.org/linux/man-pages/man4/random.4.html
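The difference is easy to see in practice: /dev/urandom never blocks, no matter what the entropy estimate says:

```shell
# Read 16 bytes from the non-blocking interface and print them as hex.
# This returns immediately even on a freshly booted machine.
head -c 16 /dev/urandom | od -An -tx1
```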

Lennart

Re: Can we maybe reduce the set of packages we install by default

By stan at 04/17/2019 - 16:33

On Wed, 17 Apr 2019 15:14:54 -0400

I run a daemon that harvests entropy from the atmosphere via an rtl2832
and feeds it into the kernel via /dev/random. And, yes it makes a big
difference to feed the entropy pool.

When random.c was rewritten to use chacha instead of the modified
mersenne twister (4.xx?), the way entropy was used changed. It used
to bleed constantly across from the pool that is /dev/random
into /dev/urandom when it was above the threshold set in
write_wakeup_threshold. Now, it only checks when the kernel routine
for get_random is called and reseeds if enough entropy is present. It
always has decremented and still decrements entropy available when
it uses some. Under mersenne it used to be possible to set a timer as
well, but that went away with chacha. I patch the new random.c to
re-enable that feature, so I can reseed the chacha on a periodic interval.

The rationale for chacha was that server farms were starving for
entropy, and it is considered more robust for low entropy conditions,
at least that is what I understand from my reading.

As far as the CPU hardware entropy generators, those are not open
source, so it is not possible to determine if they have a
backdoor. Research has shown, however, that if any true entropy is
fed into a stream with compromised entropy, it results in a stream with
better entropy. That is, a system using a compromised hardware
generator will have more robust entropy when combined with other
sources of entropy. The kernel does this via a hash to smear the mix.
An attacker can no longer utilize an attack knowing the bits came from
the compromised generator.

Things like the bit bubbler are reasonably cheap (~100 dollars US) and
provide enough entropy for a small server farm. Even the rtl2832 (~10
dollars US) provides about 90 Kbytes of entropy per second (the kernel
entropy pool is 4kB). Not enough for Monte Carlo simulations, but plenty
for a home system or a few servers.

Re: Can we maybe reduce the set of packages we install by default

By Simo Sorce at 04/17/2019 - 15:25

On Wed, 2019-04-17 at 15:14 -0400, Steve Grubb wrote:
If upstream is currently resistant, what about turning rngd into a
loadable kernel module and then ensuring it is in the initramfs and
loaded at kernel boot time?

Would this be a way to show upstream that this works and perhaps allow
inclusion later on ?

Simo.

Re: Can we maybe reduce the set of packages we install by default

By Lennart Poettering at 04/18/2019 - 04:53

So apparently the kernel can do both the RDSEED/RDRAND stuff already
on its own (and this is turned on in Fedora) and can also credit
entropy based on other hwrngs (see other mail). The latter is a
bit awkward since it currently requires a kernel cmdline option to
enable, and is global for all drivers, though it would probably be wise
to enable it individually for each driver, judging by how much the
device is trusted.

(Also note that virtio-rng is something systemd automatically loads if
it's not around but the environment supports it, and it appears
to credit entropy too.)

Lennart

Re: Can we maybe reduce the set of packages we install by default

By J.C. Cleaver at 04/17/2019 - 14:29

On 4/17/2019 10:36 AM, Lennart Poettering wrote:
This seems like a false dichotomy, no? Surely, things like this are a
possibility:
https://lists.freedesktop.org/archives/systemd-devel/2010-September/000225.html

But beyond that, is there really no way to lift this earlier in the boot
logic?

-jc

Re: Can we maybe reduce the set of packages we install by default

By Lennart Poettering at 04/18/2019 - 04:22

That too means the service gets started after the init system is up,
and the init system already requires entropy, so it's pointless.

Sure, you can invoke rngd before systemd, in which case it would have
to be able to run as PID 1 itself pretty much and then hand over
things.

But why do that in userspace at all? the "Trust CPU RNG" kernel
compile time option shows that these things are trivial to solve if
people just want to. Instead of involving rngd at all, why not add a
similar option for the TPM RNG (or any other non-CPU hw rng) and then
rngd doesn't do anything useful anymore whatsoever? I mean, to my
knowledge all those other RNGs already feed into the pool anyway, they
just don't get trusted and thus don't add to the entropy
estimate. Fixing that should be quite doable and given that
CONFIG_RANDOM_TRUST_CPU exists now it shouldn't be politically too
hard to argue for a CONFIG_RANDOM_TRUST_TPM either...

Lennart

Re: Can we maybe reduce the set of packages we install by default

By Nikos Mavrogiannopoulos at 04/24/2019 - 06:02

On Thu, Apr 18, 2019 at 10:23 AM Lennart Poettering
< ... at 0pointer dot de> wrote:
I like the part about this being trivial to solve if people want to.
Making people agree is an order of magnitude harder than fixing any
code. Nevertheless, without rngd, getrandom() would block in one of
the first services started by systemd (if it doesn't block in systemd
itself). The kernel option CONFIG_RANDOM_TRUST_CPU is not portable, so
you'll need something more for non-x86. What rngd does that the kernel
doesn't is a jitter entropy "subsystem" which will feed the kernel
with random data, even when the hardware doesn't support something
native.

Can the jitter entropy gathering be done by the kernel? It seems yes, via
the jitterentropy_rng module. So a combo of CONFIG_RANDOM_TRUST_CPU
and jitterentropy_rng may help in simplifying Fedora (if people
agree :).
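If people did agree, loading that module early would presumably be as small as a one-line modules-load fragment (file name illustrative):

```
# /etc/modules-load.d/jitterentropy.conf (hypothetical)
jitterentropy_rng
```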

regards,
Nikos

Re: Can we maybe reduce the set of packages we install by default

By Simo Sorce at 04/24/2019 - 11:19

On Wed, 2019-04-24 at 12:02 +0200, Nikos Mavrogiannopoulos wrote:
This sounds like a useful change. Can we make Fedora load this module
by default in the initrd, before systemd starts?
Will it help?
Or does this module also not add to the entropy estimate?

Simo.

Re: Can we maybe reduce the set of packages we install by default

By Lennart Poettering at 04/24/2019 - 06:23

As mentioned before: systemd itself already needs entropy (it
assigns a random 128bit id to each service invocation, dubbed its
"invocation ID", and it generates the machine ID and seeds its
hash table hash functions), hence rngd doesn't cut it anyway, since it
starts after systemd, being a service managed by systemd. If rngd was
supposed to fill up the entropy pool at boot, it would have to run as
initial PID 1 in the initrd, before systemd, and then hand over to
systemd only after the pool is full. But it doesn't, hence rngd is
pointless: it runs too late to be useful.

Lennart

Re: Can we maybe reduce the set of packages we install by default

By Björn Persson at 04/24/2019 - 11:28

Lennart Poettering wrote:
Given that access to entropy during early boot is so problematic,
hardware-dependent and full of catch-22s, it seems to me that an init
system should use the entropy pool only if it really must.

With that in mind, could you explain why the invocation ID and the hash
tables need to be cryptographically secure? Why is rand or a simple
serial number not good enough? I never heard that lack of a
cryptographically secure invocation ID was a big security problem
before SystemD.

Björn Persson

Re: Can we maybe reduce the set of packages we install by default

By Lennart Poettering at 04/25/2019 - 05:35

init systems before systemd had no notion of "service lifecycles", they
just didn't care. systemd cares about lifecycles however, and assigns
a random 128bit id (aka "uuid") to each service invocation cycle. This
can be used to associate logs and resource data of a specific service
invocation with each other.

To be suitable for their purpose of being *universally unique*, you
need a good random source for them. You cannot just use srand() and
start from a fixed seed or a time source, because then every system
would always generate the same uuids...
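For illustration, pulling such a 128-bit id out of the kernel RNG is a one-liner (systemd itself uses getrandom() for this, as noted later in the thread; this sketch reads the device node instead):

```shell
# Read 16 random bytes (128 bits) and format them as a hex id,
# similar in spirit to an invocation ID.
od -An -N16 -tx1 /dev/urandom | tr -d ' \n'; echo
```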

Naive hash tables are prone to collision attacks: if you know the hash
function used by the hash tables you might be able to trigger a DoS by
forcing collisions and thus degrading the assumed O(1) complexity of
hash table operations to O(n). See bug report how this was exploitable
in Perl hash tables for example:

https://rt.perl.org/Public/Bug/Display.html?id=22371

Because of that modern hash table implementations (including those in
systemd) will use a keyed hash function and pick a random value as
seed every now and then, so that clients cannot easily trigger DoS
like that. This random seed needs to be of relatively high quality,
since if clients could guess the seed the exercise would be
pointless. Hence no, srand() from timer or constant value wouldn't cut
it. But do note that systemd doesn't use blocking getrandom() for
seeding hash tables, but uses /dev/urandom instead (i.e. is happy with
an uninitialized entropy pool), because for the hash table collisions
it's fine if we initially don't have the best entropy as long as it
gets better over time. That's because the hash tables in systemd will
monitor the fill level and rehash with a fresh seed if we hit a
threshold.

Anyway, there are a number of other places systemd needs a bit of
entropy, these are just two prominent cases.

We also make use of /proc/sys/kernel/random/boot_id btw, which also
needs some entropy.

Lennart

Re: Can we maybe reduce the set of packages we install by default

By Stephen John Smoogen at 04/24/2019 - 06:40

On Wed, 24 Apr 2019 at 06:24, Lennart Poettering < ... at 0pointer dot de>
wrote:

There are several solutions to try here:
1. Make something like it run sooner so it helps your problems
2. Add something like it into the kernel (which has been a Sisyphus task
from what I can tell)
3. Pull it into systemd so it helps your problems and others.
4. Keep this thread going with everyone talking past each other.

Re: Can we maybe reduce the set of packages we install by default

By Lennart Poettering at 04/24/2019 - 08:25

but it can't be. it's logically impossible. let me explain this again:

1. systemd needs entropy to start services and other purposes
2. if the entropy pool is not filled up systemd thus might need to
wait for it to fill up, in a blocking fashion. When it blocks for
that it won't start any services until it unblocks again.
3. rngd is supposed to fill up the entropy pool, thus allowing systemd
to unblock and start the first services
4. rngd however runs as a regular service, i.e.:

And there you have your ordering cycle:

a. systemd starts before rngd.
b. rngd runs before the entropy pool is full.
c. the entropy pool needs to be full for systemd to start

a before b before c before a before b before c before a? How's that
solvable?

So if you want rngd to stay and do something useful, then it needs to
be modified to start *before* systemd, in the initrd, before systemd
is invoked. i.e. not as regular service, but as kind of an init before
the real init.

The current mode is just entirely bogus...

Lennart

Re: Can we maybe reduce the set of packages we install by default

By Adam Williamson at 04/24/2019 - 11:27

On Wed, 2019-04-24 at 14:25 +0200, Lennart Poettering wrote:
This is all based, though, on your expectation that everything uses
non-blocking interfaces, right? For anything that *does* use
/dev/random or blocking getrandom() - which absolutely does happen,
even though the docs say it's deprecated - rngd is still useful.

Re: Can we maybe reduce the set of packages we install by default

By Lennart Poettering at 04/25/2019 - 05:07

Well, the fix for that is probably not to clutter the system with rngd
though. Patching /dev/random out, and patching /dev/urandom into
those packages shouldn't be that difficult. It's low-hanging
fruit. Very low-hanging in fact, you don't get to fix bugs that often
by inserting a single character in your sources... ;-)

I mean, how is this ever going to be fixed if not by simply dropping
rngd from the default install and then fixing everything popping up?
You can't fix these things any other way, it doesn't work in
real-life.

Lennart

Re: Can we maybe reduce the set of packages we install by default

By Stephen John Smoogen at 04/24/2019 - 09:03

On Wed, 24 Apr 2019 at 08:26, Lennart Poettering < ... at 0pointer dot de>
wrote:

Let us look at it as a plumbing issue. We currently have a building with a
bunch of pipes with small feeds and you as the morning janitor come in
first of the day to wash the floors and clean things so other people can
get to work. To fill your buckets you need a big basin to start up and have
to instead wait around as the pipes fill up your cleaning bucket. You look
around and see that people installed various buckets and pots to act as
basins in the rooms they use to wash their hands and faces, but you
can't use them as they need to be cleaned first. No one sees your problem
because by the time their day starts... you have been in there for hours and
got your drip drip going and done your work. The problem here is that how
you have come across is "Well I need more water, so we should rip out all
the basins until I get one too. Just use this mopbucket water like I do."

I don't know if that is what you are meaning to say or not. If it isn't
then I am just trying to explain why people are 'reacting' versus 'fixing'.
Yes the problem needs to be solved sooner in the chain. You need a proper
basin to fill up water in. In fact we all need proper plumbing which helps
each service we are running. Working out how to get it is what we should be
doing but instead we are arguing over who is going to go on strike first to
get it.

So if you want rngd to stay and do something useful, then it needs to

Re: Can we maybe reduce the set of packages we install by default

By Nikos Mavrogiannopoulos at 04/24/2019 - 06:37

On Wed, Apr 24, 2019 at 12:24 PM Lennart Poettering
< ... at 0pointer dot de> wrote:
The goal of running rngd early was to have the system boot, not
necessarily to address systemd's need for random numbers. In that it
is successful. I do not disagree that it is not a clean solution.

regards,
Nikos

Re: Can we maybe reduce the set of packages we install by default

By Lennart Poettering at 04/24/2019 - 08:16

But how can it be successful? If systemd already needs to wait until
the pool is full to get the randomness it needs (and thus blocks
system boot-up as a whole) then what's the point in running rngd
afterwards? To reach the point where rngd can be run we already need
the pool to be full, and hence rngd can't do any good at all anymore,
whatsoever.

Lennart

Re: Can we maybe reduce the set of packages we install by default

By Tomas Mraz at 04/24/2019 - 11:43

On Wed, 2019-04-24 at 14:16 +0200, Lennart Poettering wrote:
What does systemd use to generate these random numbers? Does it
directly call getrandom() or does something else?

Re: Can we maybe reduce the set of packages we install by default

By Lennart Poettering at 04/25/2019 - 05:14

Depends.

For the invocation IDs we use getrandom() with default args
(i.e. blocking behaviour). Similar for all other cases where we pick
128bit random identifiers (also known as uuids).

For the hashtable seeds we use classic /dev/urandom (i.e. entropy from
a possibly non-initialized pool) since it's OK if those seeds are
crappy initially, as long as they get better over time, since we
reseed if we see too many hash collisions.

We never use /dev/random or GRND_RANDOM.

Lennart

Re: Can we maybe reduce the set of packages we install by default

By Przemek Klosowski at 04/25/2019 - 13:14

On 4/25/19 5:14 AM, Lennart Poettering wrote:
I thought that hashing would be fine with a completely predictable
generator, as long as the sequence itself is not correlated, i.e. it
would be OK if the sequence used for hashing was the same on every system.

Of course that particular sequence might lead to collisions, but then
another uncorrelated but completely predictable sequence should fix
that. In other words, it could be seeded from a constant table like
[1,2,3,4,.....], just as well as from /dev/urandom regardless of its
entropy.

My point here is that actual entropy of the seeding is irrelevant, at
all times---would you agree?

That leaves the invocation IDs---the UUIDs need to be random to be truly
Universally Unique, but a limited-entropy system is implicitly
isolated, so maybe the limited UUIDs could be seen as Universal in their
very small Universe. What is the lifetime of the original
invocation IDs? What are the negative implications of the initial UUIDs
being less random than the subsequent ones?

Re: Can we maybe reduce the set of packages we install by default

By Lennart Poettering at 04/25/2019 - 13:43

No, because then I can calculate in advance which hashes the target
system uses and thus still trigger collisions. The seed hence must be
hard to guess from the outside, and thus cannot follow a predictable scheme.

No, I would not agree.

Invocation IDs are useful for globally pinpointing a specific service
invocation. If the UUIDs stopped being truly random then they'd
stop being universally unique and thus stop being useful.

Lennart

Re: Can we maybe reduce the set of packages we install by default

By Björn Persson at 04/25/2019 - 18:10

Lennart Poettering wrote:
It's perfectly possible for a number to be unique without being random.

As an example, you could hash the machine ID, which is supposedly
unique in space, and the system clock, which is unique in time. That
makes the hash unique in both space and time. Produce invocation IDs by
counting up from that value, or by hashing repeatedly. That way you
wouldn't need entropy for invocation IDs at every boot, only during
installation.
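A rough sketch of that scheme (the function name and field widths are my own invention, not a systemd proposal): hash the machine ID together with the boot timestamp to get a starting point that is unique in space and time, then derive each invocation ID by hashing a counter into it.

```python
import hashlib
import itertools
import time
import uuid

def invocation_ids(machine_id: str, boot_time_ns: int):
    # Unique in space (machine ID) and time (boot timestamp), but
    # entirely predictable to anyone who knows both inputs.
    base = hashlib.blake2b(
        f"{machine_id}:{boot_time_ns}".encode(), digest_size=16
    ).digest()
    for counter in itertools.count():
        digest = hashlib.blake2b(
            base + counter.to_bytes(8, "big"), digest_size=16
        ).digest()
        yield uuid.UUID(bytes=digest)

ids = invocation_ids("0123456789abcdef0123456789abcdef", time.time_ns())
first, second = next(ids), next(ids)
```

Only the installation-time machine ID needs entropy; every subsequent ID follows deterministically, which is exactly the trade-off under discussion.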

Such values would of course be somewhat predictable, but according to
what you've said in this thread, invocation IDs don't need to be
unpredictable. You've only said that you want them unique.

(Of course one needs to be aware that collisions are not impossible,
only improbable. That's equally true for hashes and random numbers.)

Björn Persson

Re: Can we maybe reduce the set of packages we install by defaul

By Przemek Klosowski at 04/26/2019 - 16:15

On 4/25/19 6:10 PM, Björn Persson wrote:
That is a good point---and by the way, you COULD make the same argument
for hashing: one could create another installation-time seed value that
is guaranteed never to leak from the system, and mix it into the hash
creation, making the hash unpredictable.

Between those two workarounds, it looks to me like we don't need
randomness in ordinary systemd startup at all?

At the UUID-level bit lengths, the probability is vanishingly
small---although one does have to realize that even very-low-probability
events can be realized with enough statistics, as in this recent
measurement of xenon-124 radioactive decay with a time constant of
over 10^22 years, a trillion times longer than the age of the Universe:

<a href="https://www.nature.com/articles/s41586-019-1124-4" title="https://www.nature.com/articles/s41586-019-1124-4">https://www.nature.com/articles/s41586-019-1124-4</a>

Re: Can we maybe reduce the set of packages we install by defaul

By stan at 04/18/2019 - 12:16

On Thu, 18 Apr 2019 10:22:27 +0200

On shutdown the existing entropy is stored for use at startup (it is
still entropy on restart if an attacker hasn't seen it). So, if init
uses that entropy and depletes it, it would be a positive to restore it
as soon as possible. But that is the purpose of having the CPRNG be
robust. The seeds from last run are used until there is enough new
entropy to reseed, and in the meantime the CPRNG provides the 'random'
numbers. For most purposes this is more than adequate. If the CPRNG
can be cracked with the short stream fed from it to start up the
system, it is not robust. And that assumes that an attacker has
complete access to that stream.

The main threat from random numbers is that 'password' is a perfectly
legitimate random string, so even hardware random number generators can
generate easy-to-crack strings of numbers. That is, random doesn't
mean cryptographically strong. What we really want are
unpredictable, cryptographically strong numbers. And that is also why
it is so hard to validate random number generators; if they aren't
failing the tests occasionally, they aren't random. But if they are
failing the tests occasionally, it might or might not indicate they are
not random, and thus predictable. And we might not even be testing
the right thing! In any case, a good reason to randomly change seeds to
CPRNGs regularly, if possible. Even a vulnerable CPRNG with a long
stream required to crack it is robust if reseeded constantly at short
intervals with randomness (the strategy of the linux kernel).
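As a toy model of that strategy (a deliberately simplified hash construction, nothing like the kernel's actual ChaCha20-based CRNG, and not safe for real use): a generator whose output is derived by hashing an internal state, plus a reseed operation that mixes fresh randomness into the state at intervals, bounding how long a compromised state stays useful.

```python
import hashlib
import os

class ToyCPRNG:
    """Hash-based toy generator: do NOT use for real cryptography."""

    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(seed).digest()

    def read(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            # Output block and next state are derived with different
            # suffixes, so the output never reveals the state directly.
            out += hashlib.sha256(self.state + b"out").digest()
            self.state = hashlib.sha256(self.state + b"next").digest()
        return out[:n]

    def reseed(self, fresh: bytes) -> None:
        # Mixing fresh entropy in at short intervals limits how long a
        # compromised state remains useful to an attacker.
        self.state = hashlib.sha256(self.state + fresh).digest()

rng = ToyCPRNG(seed=b"seed-saved-from-last-boot")
early = rng.read(32)          # served before new entropy arrives
rng.reseed(os.urandom(32))    # later: fold in gathered randomness
later = rng.read(32)
```

The Linux kernel does this reseed-at-intervals dance internally; the debate in this thread is whether userspace also needs to feed it.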

Re: Can we maybe reduce the set of packages we install by defaul

By Lennart Poettering at 04/23/2019 - 06:07

That is pretty late: it's systemd-random-seed.service that does that
and it runs after /var is mounted writable, which is relatively late
in the early-boot phase. Moreover we don't credit entropy when writing
the seed back into the kernel, since it's not safe to do so in the
general case, as people frequently deploy the same pre-built image on
multiple systems and tend to forget to invalidate the saved seed
then. And all images that come up with the same saved seed would have
the same entropy pool initially, hence the exercise would be
pointless.

There has been work on making this opt-in
(<a href="https://github.com/systemd/systemd/pull/10621" title="https://github.com/systemd/systemd/pull/10621">https://github.com/systemd/systemd/pull/10621</a>) but this has stalled
since. If anyone wants to resurrect that, please do.

However, regardless whether s-r-s.s credits entropy or does not: it
runs too late: there are plenty of entropy users running before it that
need to wait for the pool to be filled. And we can't really move
s-r-s.s earlier.

[And also: the concept of "depleting" the entropy pool is a
misconception. This doesn't happen if people use the APIs correctly,
i.e. /dev/urandom instead of /dev/random (or their getrandom()
equivalents). The kernel documentation calls /dev/random a "legacy
interface" for a reason (see
<a href="http://man7.org/linux/man-pages/man4/urandom.4.html" title="http://man7.org/linux/man-pages/man4/urandom.4.html">http://man7.org/linux/man-pages/man4/urandom.4.html</a>). Once the entropy
pool is filled it is filled for good, if /dev/urandom is used.]
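On the API point: Python exposes the getrandom(2) syscall directly (Linux-only), and the default flags give the recommended behaviour — block once until the pool is initialized, then never "deplete". A small illustration:

```python
import os

# os.getrandom() wraps the Linux getrandom(2) syscall. With flags=0 it
# behaves like /dev/urandom after initialization: it blocks only until
# the kernel's pool has been seeded once, then always returns
# immediately -- the pool does not "deplete".
key = os.getrandom(32)

# GRND_NONBLOCK asks for an error instead of blocking if the pool is
# not yet initialized (mostly relevant very early at boot).
try:
    early = os.getrandom(32, os.GRND_NONBLOCK)
except BlockingIOError:
    early = None  # pool not seeded yet; the caller must decide policy
```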

(BTW: in case you wonder why we wait for /var to be writable before
s-r-s.s runs: that's because we need to invalidate the old stored
seed when you use it, so that it is never reused again. This means we
need to overwrite the seed file when we use it.)
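That lifecycle can be sketched like this (paths and names are illustrative, not systemd's implementation; a stand-in pool file is used instead of /dev/urandom so the sketch is self-contained):

```python
import os

SEED_SIZE = 512

def load_and_refresh_seed(seed_path: str, pool_path: str) -> None:
    # 1. Feed the stored seed to the pool WITHOUT crediting entropy:
    #    the same image may have been cloned to many machines, so the
    #    seed may not be unique. Writing mixes it in, claims nothing.
    if os.path.exists(seed_path):
        with open(seed_path, "rb") as f, open(pool_path, "ab") as pool:
            pool.write(f.read())

    # 2. Immediately overwrite the seed file with fresh randomness so
    #    the old seed is invalidated and never reused on the next boot.
    #    This is why the service must wait for /var to be writable.
    fresh = os.urandom(SEED_SIZE)
    tmp = seed_path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(fresh)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, seed_path)  # atomic rename on POSIX
```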

Lennart

Re: Can we maybe reduce the set of packages we install by defaul

By Jason L Tibbitts III at 04/17/2019 - 14:01

LP> That's not true anymore. There's a kernel compile time option now
LP> for that in CONFIG_RANDOM_TRUST_CPU=y. And yes, the Fedora kernel
LP> sets that since a while.

Isn't this arch-dependent?

config RANDOM_TRUST_CPU
bool "Trust the CPU manufacturer to initialize Linux's CRNG"
depends on X86 || S390 || PPC
default n

Not sure what happens on ARM but I think it would need to be considered.

- J<

Re: Can we maybe reduce the set of packages we install by defaul

By Lennart Poettering at 04/18/2019 - 04:13

Yes it is. But so is rngd afaik? It uses the RDTSC, RDSEED and TPM RNG
iiuc and those are either x86 specific (in case of RDTSC/RDSEED) or
pretty much so (in case of the TPM RNG).

Lennart

Re: Can we maybe reduce the set of packages we install by defaul

By Jason L Tibbitts III at 04/18/2019 - 11:44

LP> Yes it is. But so is rngd afaik?

The software isn't exclusive to any particular architecture, though it
may of course have different sources of entropy on different
architectures.

- J<

Re: Can we maybe reduce the set of packages we install by defaul

By Simo Sorce at 04/17/2019 - 13:41

On Wed, 2019-04-17 at 19:36 +0200, Lennart Poettering wrote:
Big +1, I've been saying this for ages as well ...

I concur,
I would really like to see rngd become a thing of the past as well.
The kernel has all the tools and access needed to reseed itself;
*requiring* a racy userspace tool to do the kernel's job is a bit
ridiculous.

Simo.

Re: Can we maybe reduce the set of packages we install by defaul

By Daniel P. Berrange at 04/17/2019 - 05:09

On Wed, Apr 17, 2019 at 10:38:18AM +0200, Lennart Poettering wrote:
IIUC, RDRAND exists from Ivy Bridge generation CPUs onwards for Intel,
and from EPYC CPUs onwards for AMD. I've no idea what the story is for
non-x86 CPUs & RDRAND equivalents.

Anyway, whether we can rely on RDRAND depends on what we consider the
minimum targeted CPU models & architectures. I'm guessing that we
do intend Fedora to work correctly on CPUs predating/lacking RDRAND.

KVM guests can have a virtio-rng device provided on any architecture,
which feeds from host's /dev/urandom, but it is unfortunately fairly
rare for public cloud providers to enable it :-(

rngd includes support for the "jitter entropy" source which uses
CPU jitter to feed the RNG. At least in RHEL, this is the recommended
option when the CPUs lack RDRAND or equivalent and is why rngd
is enabled by default there. IIUC it is reading the jitter entropy
from the kernel's crypto APIs, optionally applying AES to data, and
then feeding it back into the kernel's rng pool.
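The general idea of a jitter source (a much-simplified sketch of my own, not rngd's actual jent implementation): time many runs of a small workload with a high-resolution clock and condense the unpredictable timing deltas into output bytes with a hash.

```python
import hashlib
import time

def jitter_bytes(n_samples: int = 2048) -> bytes:
    # Timing noise from caches, interrupts, and frequency scaling makes
    # the low bits of successive perf_counter_ns() deltas hard to
    # predict; hashing all the deltas condenses that noise into 32
    # bytes. (Real jitter sources also run health tests on the deltas.)
    h = hashlib.sha256()
    prev = time.perf_counter_ns()
    acc = 0
    for _ in range(n_samples):
        acc = (acc * 31 + 7) & 0xFFFFFFFF  # tiny workload to time
        now = time.perf_counter_ns()
        h.update((now - prev).to_bytes(8, "big", signed=True))
        prev = now
    return h.digest()

sample = jitter_bytes()
```

The appeal is that it needs no special hardware, which is why it is the fallback when RDRAND and virtio-rng are unavailable.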

/dev/random can get depleted after boot. Though modern recommendation
is for apps to use /dev/urandom by default (or getrandom/getentropy
syscalls), some probably still use /dev/random for historical baggage
reasons.

Regards,
Daniel

Re: Can we maybe reduce the set of packages we install by defaul

By Colin Walters at 04/11/2019 - 12:48

On Tue, Apr 9, 2019, at 12:07 PM, Lennart Poettering wrote:
The dependency chain of libvirtd is just doomed from this perspective. For Fedora Silverblue (and FCOS) we made the intentional decision not to include it by default (though I personally have it package-layered).

(Using qemu inside a container without libvirt is also a nice pattern, we use this in <a href="https://github.com/coreos/coreos-assembler" title="https://github.com/coreos/coreos-assembler">https://github.com/coreos/coreos-assembler</a> )

Re: Can we maybe reduce the set of packages we install by defaul

By Daniel P. Berrange at 04/11/2019 - 13:29

On Thu, Apr 11, 2019 at 12:48:13PM -0400, Colin Walters wrote:
This is a rather sweeping inaccurate statement IMHO.

The scale of the dependencies you get from installing libvirt varies
significantly depending on which libvirt RPMs you choose to install
or depend on. There's quite a lot of modularization there if you pick
the right sub-RPMs to minimize install footprint. In addition some
of the footprint you get when installing libvirt is actually coming
from QEMU itself.

eg starting from the fedora:30 docker image

* "libvirt-daemon-driver-qemu"

The bare minimum currently needed by the libvirt QEMU driver impl.

56 RPMs / 100 MB

* "libvirt-daemon-kvm"

All functionality usable in combination with libvirt and KVM. Also
pulls in the qemu-system-XXXX to match your host arch.

300 RPMs / 430 MB

Of this, 211 RPMs / 300 MB is due to qemu-system-x86 & qemu-img
RPMs, rather than libvirt itself. So real libvirt overhead here
is only 90 RPMs / 120 MB

* "libvirt-daemon-qemu"

All functionality usable in combination with libvirt and QEMU (any
arch emulation). Pulls in every qemu-system-XXX RPM

350 RPMs / 1 GB

(The extra delta here is really coming from
QEMU not libvirt itself)

The first libvirt-daemon-driver-qemu RPM should in fact be even smaller
than it is, but we have an accidental dependency between two parts of the
libvirt codebase. This will be addressed in F31.

Regards,
Daniel

Re: Can we maybe reduce the set of packages we install by defaul

By Paul W. Frields at 04/11/2019 - 12:09

On Tue, Apr 9, 2019 at 12:07 PM Lennart Poettering < ... at 0pointer dot de> wrote:
[...]
Although somewhat orthogonal to your notes below, overall there's a
lot of package-entangling in the basic platform underlying the
Workstation as well. This is something we should look at if we're to
make progress on the CI and Lifecycle objectives -- i.e. being able to
produce the basic platform for integration more quickly. I was talking to
contyk about this the other day and we are starting to throw some
ideas around about that. Again, doesn't solve all your individual
concerns below but at least related. A good portion of the other
subthread is really about choices made and how we enable bits properly
for something like Workstation, which is also valid but a different
effort I think.

[...]
Interestingly I think Google Chrome needs this when it installs,
though it seems nonsensical to me. (Chrome is installed by about 50%
of our users given some informal stats, so writing it off would be
shooting ourselves in the foot.) That's something the Workstation
folks may want to work with them to fix in a more systemd-ish way.

Re: Can we maybe reduce the set of packages we install by defaul

By Dominik 'Rathan... at 04/12/2019 - 05:35

On Thursday, 11 April 2019 at 18:09, Paul Frields wrote:
Chrome doesn't require atd explicitly (nor is it pulled in by any of its
dependencies).

It does use it in %post to sneak in a cron job to add a repo config
file and its GPG key trust behind your back:

service atd start
echo "sh /etc/cron.daily/google-chrome" | at now + 2 minute > /dev/null 2>&1

So, actually not having atd installed won't break Chrome as it will
just ignore the 'at' command execution error due to 'exit 0' a few
lines below it.

Regards,
Dominik

Re: Can we maybe reduce the set of packages we install by defaul

By Chris Adams at 04/12/2019 - 08:47

Once upon a time, Dominik 'Rathann' Mierzejewski < ... at greysector dot net> said:
That's incorrect. The Google Chrome RPM requires /usr/bin/lsb_release,
which is from redhat-lsb-core, and that requires /usr/bin/at.