Problems with infrastructure

Hi all,

It has come to my attention that some developers have "issues" with
KDE infrastructure in certain areas. This is the first time I've heard
of these "problems" and to my knowledge nobody has ever spoken to
sysadmin regarding them.

If people have an issue, can they please actually raise it so we can
discuss the matter and reach an agreement on something which would
sort out your problem - and might actually help others too?

Ben Cooksley
KDE Sysadmin


Re: Problems with infrastructure

By Christian Mollekopf at 12/10/2014 - 05:28

On Wednesday 10 December 2014 15.27:31 Ben Cooksley wrote:
That might have been me, since I recently used github for some repos/branches
that could have ended up on kde infrastructure. I did that mostly because I
thought it didn't matter, because I wanted to try something else, and because
I was too lazy to write this mail. So, since I've used github a bit now, let me
share my wishlist with you =)

* deleting branches: This is the only major gripe I have with the kde
infrastructure. I think everyone should be able to delete branches (except
some blacklisted ones). If I cannot delete my branches when I no longer need
them, I try to avoid pushing them, which doesn't help. Personal clones are not
a solution IMO because you have to manage additional remotes. IMO the benefits
outweigh the danger of someone accidentally deleting someone else's branch.
Perhaps a naming scheme could be established for such branches, such as:
dev/$foo or feature/$foo

* pull requests/the webinterface: reviewboard is awesome for single patches
every now and then, but it's rather useless when you work with branches IMO.
With github we have a nice webinterface to review branches while keeping it
simple. Gerrit may be what I'm looking for though (never used it, looking
forward to seeing how the testing goes).

* The rest I suppose is mostly psychological:
** On github I can click my way to a new repo, on kde I have to look up a
command (being able to do it from the commandline in the first place is a
benefit though)
** I find github's webinterface prettier and more useful. I actually use it
sometimes, whereas I never use the KDE one. If you're
interested I could try to figure out what it actually is that I like more,
because I can't really tell right now.

I know the last few are a bit silly, but they are what could tip the decision,
unless I make the conscious decision that I want it on kde infrastructure
because it's kde infrastructure (i.e. for community cohesion). That is quite
an abstract thought which we all should be able to make, but without it
it's a free market where sexiness may win ;-)

Anyways, thanks for doing a great job and caring, I'll try to be a bit more
helpful in the future as well.


Re: Problems with infrastructure

By Sebastian Kügler at 12/10/2014 - 09:05

[I love our infrastructure, just this bit triggered my reply-to-email reflex]

On Wednesday, December 10, 2014 10:28:59 Christian Mollekopf wrote:
In Plasma, we usually name branches <username>/<topic>, so for example
sebas/breakpluginloader. It'd be supersweet if the user who "owns" this branch
would be able to delete it. I often leave these branches lingering for too
long before I ask notmart to delete them, so that would be a useful addition.

I quite like not being able to delete arbitrary branches.


Re: Problems with infrastructure

By Jan Kundrát at 12/10/2014 - 11:53

On Wednesday, 10 December 2014 10:28:59 CEST, Christian Mollekopf wrote:
That depends on what you're looking for. Gerrit puts emphasis on review of
individual patches. It will present related patches "together", but you
won't be able to review them as a single entity -- the goal of Gerrit is to
ensure that the history is clean, and it is operating under an assumption
that each commit matters. Unless I'm mistaken, a GitHub pull request shows
you a diff which represents the starting point and the final point of that
branch as a single unit, with an option to go into individual commits.
Gerrit is different -- not worse, different :).

Regarding the "testing of Gerrit", I haven't received much feedback yet.
Three repositories are using it so far (kio, plasma-framework, trojita),
and if a repo owner/maintainer of some other project is willing to take
part in this testing, I don't see a problem with it.

With kind regards,

Re: Problems with infrastructure

By Albert Astals Cid at 12/10/2014 - 14:41

On Wednesday, 10 December 2014, at 16:53:09, Jan Kundrát wrote:
I see some problems with gerrit:

A) it makes our messaging more complex: we tell people to use reviewboard,
except for these 3 repos, for which you have to use a different tool

B) As a "gardener" it makes my life harder, since I now have to go through two
different patch tracking systems to see if people forgot to commit or review
stuff.

C) If I were a developer of some of those projects, it would be harder for
me to check the status of my patches, since I would need to visit
two webs instead of one.

D) There's no way to create a review without using the relatively unfriendly
gerrit process

A, B, and C are solved if gerrit is only in testing to eventually replace
reviewboard totally; but not if it is meant to coexist with reviewboard (which
would make some people happier but would in my opinion be negative for the
common good).

D is really important to me since it makes it harder for non-hardcore git
users to contribute; it took me days to start understanding Qt's gerrit and I
am still not sure I understand it fully. With reviewboard I do git diff and
use the web to upload a patch, as simple as it gets.

And yes, I know people complain about reviewboard, but that is because it's
the tool we use; if we used gerrit, we would probably get complaints too. I
want to make sure we're not investing time in what, in the end, is most
probably a zero-sum game.


Re: Problems with infrastructure

By Jan Kundrát at 12/10/2014 - 19:41

On Wednesday, 10 December 2014 19:41:31 CEST, Albert Astals Cid wrote:
Please take your time to try out KDE's Gerrit and don't judge it based on
your experience with Qt's Gerrit (in fact, try to forget that one if
possible). There's been more than two years of development which went into
the version which we use, and this effort is IMHO quite visible.

As a random data point, I've had two newcomers (one of them a GCI student)
submitting their first patch ever through Gerrit within 15 minutes after I
asked them to use Gerrit, with no prior experience whatsoever. I'm pretty
sure that the GCI students in general aren't considered an etalon of
experienced contributors.
Also, uploading a patch with Gerrit is a matter of `git push gerrit
HEAD:refs/for/master`. Are you suggesting that this is harder than `git
format-patch origin/master` and uploading the resulting file manually?
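To make the comparison concrete, here is a minimal sketch of the Gerrit upload flow, run against a throwaway local bare repository standing in for a Gerrit server (the remote name and URL are placeholders; a real Gerrit intercepts pushes to refs/for/* and turns them into reviews, which a plain repository will not do):

```shell
set -e
tmp=$(mktemp -d)

# Stand-in for a Gerrit server: a plain bare repo just stores the ref,
# whereas real Gerrit would turn the push into a review request.
git init -q --bare "$tmp/gerrit.git"

git init -q "$tmp/work"
cd "$tmp/work"
git -c user.name=dev -c user.email=dev@example.org \
    commit -q --allow-empty -m "example change"

# One-time setup: add the review server as a remote
git remote add gerrit "$tmp/gerrit.git"

# Upload the change for review via the magic refs/for/<branch> ref
git push -q gerrit HEAD:refs/for/master
```

The one-time `git remote add` is the setup cost being discussed; after that, the upload is a single push.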

Right, I believe that one. As a project maintainer though, I can say that
Gerrit does make my life much easier -- being able to test patch series
myself with a single `git pull` is *so* different to the experience of
fetching patches by hand from RB (and undoing the occasional breakage).
Also, there's no early CI feedback with RB, and nobody is working on this,
nor has anyone announced any plans to work on this topic during the past
years. That alone would change the balance for me.


Re: Problems with infrastructure

By Albert Astals Cid at 12/10/2014 - 19:51

On Thursday, 11 December 2014, at 00:41:56, Jan Kundrát wrote:
Yes, it is harder.

You need to set up git correctly, so that "gerrit" in that command is valid,
you need to understand you're pushing to a different server than the "real"
one, you need to commit (I never do format-patch, just git diff); all in all
it requires you to have a bigger git understanding.

Besides, in reviewboard I could get a tarball, produce a diff and upload it
easily. I have not investigated Luigi's links yet, but "as far as I know" that
is not easy/doable in gerrit.


Re: Problems with infrastructure

By Jan Kundrát at 12/10/2014 - 20:18

On Thursday, 11 December 2014 00:51:28 CEST, Albert Astals Cid wrote:
I see what you're saying, and you're probably right -- there's a bar,
indeed. That bar could however be effectively removed by having a
spoonfeeding, step-by-step documentation on how to submit patches with
Gerrit. I'm still hoping that I'm not the only guy who cares about this,
and that maybe someone else is going to produce such a howto (hint for
bystanders: now is the time, I've put quite a few hours into this already).

Furthermore, there are upstream patches under review for making it possible
to create a review through the web, with no use of CLI tools or a `git
push` equivalent of any sort. When these are released, I'll be happy to
upgrade, as usual.

Do we have some stats saying how many people download tarballs / zips from
ReviewBoard? Is there a User-Agent breakdown for the patch submission to RB
so that we could look on how many people do push files around, and can we
compare that to the number of people using rb-tools? I'll be happy to do
the number crunching myself, but I don't have access to the logs.

Anyway, I understand that my experience is probably going to differ from
the experience of anybody else to some extent, but to me, the hardest thing
in making a patch is usually finding out what changes to make in a random
C++ file of a project whose sources I'm seeing for the first time. Compared
to *that*, creating a git diff has always been much easier for me.

Moreover, when that patch is ready, someone still needs to commit it and
make sure that it doesn't introduce any regressions. Right now, all parts
of this duty are effectively up to the project maintainer, which means that
the process doesn't scale at all. Unless the patch author is a KDE
developer already (in which case I fully expect them to be able to
copy-paste three commands from a manual to be able to push to Gerrit), a
project maintainer has to fetch stuff from RB by hand, copy-paste a commit
message, perform a build cycle, verify that stuff still works and upload
the result to git.

Considering a typical turnover time of patches which I see within KDE, I
don't think that we have superfluous reviewers or project maintainers, so
my suggestion is to prioritize making their workflow easier at the expense
of, well, forcing the contributors to spend their time copy-pasting a
couple of commands from a manual *once* during the course of their
involvement with a given project.

Anyway, I know that the pre-Gerrit process was so painful for me that I
actually decided to invest dozens of hours of my time into this and get
the Gerrit + CI ball rolling, and I'm not really willing to go back to
shuffling bits around by hand. This is 2014, and we have computers to do
the repetitive bits for us, don't we?

I've had my fair share of beers tonight, so I hope I won't manage to offend
people with my bluntness here.


Re: Problems with infrastructure

By Ivan Cukic at 12/11/2014 - 04:13

An alternative to the setup documentation is to actually do it
automatically. A tool like kdesrc-build could set up the proper remotes
when checking out a project.

Re: Problems with infrastructure

By =?utf-8?Q?Thoma... at 12/11/2014 - 08:24

On Thursday, 11 December 2014 09:13:23 CEST, Ivan Čukić wrote:
I assume that Jan and Albert are talking about different scopes here.

If you're a regular committer to a project or want to add a feature patch, consisting of several commits and lasting weeks until finished, you'll have to use git (properly) anyway, and the *additional* barrier to using gerrit is quite negligible ("git remote add gerrit <url>"), while the benefit of gerrit (it enforces proper git usage, and "git push gerrit HEAD:refs/for/master" is from my personal experience superior to rbtools, where my only attempt ended in a disaster ;-) is notable.

Otoh, Albert seems to think of patches like "typo", "i think this should be <=" or "this loop does not exit in all conditions".

For such things
1. cloning a repo
2. figuring how to add a remote (and which)
3. fixing the issue
4. committing
5. pushing
will fail to my personal laziness on step 1 already. I'd rather look up the author and send him a mail (if this item actually bugs me).
I would actually not even want to use RB for this either, and I can very much assume that "*sigh*, git" contributors (git is actually great, but it /has/ a steep learning curve) are not willing to go for 2, 4 & 5 (or feel uncomfortable/unsure when committing/pushing).
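For reference, steps 1-5 above amount to something like the following sketch. All repository paths, remote names, and the "fix" itself are placeholders; throwaway local bare repositories stand in for the project repo and the review server:

```shell
set -e
tmp=$(mktemp -d)

# Throwaway stand-ins for the project repository and the review remote
git init -q --bare -b master "$tmp/origin.git"
git init -q --bare -b master "$tmp/review.git"

# Seed the project repository with an initial commit
git init -q -b master "$tmp/seed"
git -C "$tmp/seed" -c user.name=dev -c user.email=dev@example.org \
    commit -q --allow-empty -m "initial"
git -C "$tmp/seed" push -q "$tmp/origin.git" master:master

# 1. cloning the repo
git clone -q "$tmp/origin.git" "$tmp/work"
cd "$tmp/work"

# 2. figuring how to add a remote (and which)
git remote add review "$tmp/review.git"

# 3. fixing the issue (here: a stand-in one-line change)
echo "the fix" > fix.txt

# 4. committing
git add fix.txt
git -c user.name=dev -c user.email=dev@example.org \
    commit -q -m "fix typo"

# 5. pushing the fix for review
git push -q review HEAD:refs/for/master
```

Written out this way, it is easy to see where a drive-by contributor gives up: everything before step 3 is overhead unrelated to the fix itself.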

Ideally™, one could just annotate the sources in quickgit (where one probably looked up the code) and trigger a "review request" (of whatever kind) this way.


I don't like all things about the current gerrit webfrontend (notably the "open issue" handling after partial fixes), but that's no conceptual blocker from my pov.

Re: Problems with infrastructure

By Albert Astals Cid at 12/11/2014 - 18:20

On Thursday, 11 December 2014, at 01:18:57, Jan Kundrát wrote:
I think everyone will agree that we like automation.

You need to understand though that changing patch review systems is
not your decision to take (nor mine); we need to have general
agreement/consensus when changing systems this important.

That you invested dozens of hours is great, since it has made it possible for
others to try, but by itself that says nothing about whether the software is
better for us or not.

Best Regards,

Re: Problems with infrastructure

By Jan Kundrát at 12/11/2014 - 20:16

On Thursday, 11 December 2014 23:20:59 CEST, Albert Astals Cid wrote:
Changing systems is not what I propose, though. What I'm arguing for is
empowering the individual projects to be able to choose tools which work
well for them. That's very different from saying "whole KDE should just
switch to Gerrit", and I'm not proposing that. Some people have made it
clear that no change is going to happen, and I can live with that.
I do happen to think that yes, switching to Gerrit would in fact be
a good move for KDE as a whole, but sharing a view is something other than
making people change their systems. If you like RB and you're a project
maintainer, sure, by all means do use it for your projects -- I'm not going
to force you to switch for the sake of my pleasure or something similar.

I also admit that I would probably feel a little bit sad if Trojita ended
up being the only project which stuck with Gerrit, but if that was the
general consensus of the community, who am I to dispute the wishes of
people doing the actual work?

I'm sorry if the impression which I managed to create by pointing out what
I perceive to be strong points of Gerrit and weak points of the
alternatives was something different.

With kind regards,

Re: Problems with infrastructure

By Albert Astals Cid at 12/12/2014 - 17:44

On Friday, 12 December 2014, at 01:16:14, Jan Kundrát wrote:
That is something bad: where you see empowering, I see needless fragmentation.
We need to move together, otherwise things will get harder and harder to
maintain.

Where was that discussed? Which people is that?


Re: Problems with infrastructure

By Jan Kundrát at 12/13/2014 - 08:46

On Friday, 12 December 2014 22:44:39 CEST, Albert Astals Cid wrote:
(Removing PIM from the list, because I don't see this as a PIM matter.)

That was the impression which I got from the #kde-devel IRC channel and the
kde-core-devel ML right after that frameworks BoF during Akademy. When
re-reading the threads and the IRC logs today, I no longer have the
impression that there was a clear, absolute and strict "no", but there was
nonetheless IMHO quite a strong resistance to using something "as horrific
as Gerrit". That might explain why I think that there will be a subset of
people who won't be fine with any change, and because I respect their
opinion, I don't want to force such a change upon them.

So, basically, from my point of view -- the tools are here, the CI is done.
The CI bits in particular make the workflow much more appealing to me. Now
it's up to the KDE developers to come to a decision whether they want that
or not.

With kind regards,

Re: Problems with infrastructure

By Albert Astals Cid at 12/13/2014 - 13:13

On Saturday, 13 December 2014, at 13:46:24, Jan Kundrát wrote:
As I said, there is value in uniformity of the tooling. I like to think we're
all reasonable people and understand that if the majority thinks it's a better
tool, it makes sense to move to that tool. That's what happened with git.

And if after evaluating it, it doesn't make sense, we don't. That's what
happened with gitlab.

Now to me it seems that you're basically saying "you" do what you want, I'll
keep using "my" stuff. Which I find sad, since it's creating artificial
barriers between "you" and "me" :/

It also puts the discussion about a possible switch to gerrit in a weird
situation, since we either all switch and have uniformity or we don't, and then
we end up with reviewboard+gerrit :/

Maybe you could start a thread explaining why gerrit is better than
reviewboard and why we should switch to it?


Re: Problems with infrastructure

By Kevin Kofler at 12/14/2014 - 21:58

Albert Astals Cid wrote:
Or we just stop the Gerrit experiment in the core KDE projects as a failure
(it was always made clear that it is only an experiment and can be ended at
any moment), and kick out Trojitá from KDE if Jan absolutely wants to use
Gerrit. (It's not even a KF5 or kdelibs application, but a Qt-only one.)
Then he can use whatever tools he wants. Problem solved.

Kevin Kofler

Re: Problems with infrastructure

By Luca Beltrame at 12/15/2014 - 02:32

Hello Kevin,

As Aleix said already, this does not help the discussion in any way. I can
see, even if I'm far from being an expert here, that Jan has been quite
constructive, and the effort (adopted or not, it doesn't matter, as we're
looking at the intentions here) was to make software better.

Some may not agree with his choice of tools - but so far the discussion has
been quite level-headed, and technical. I say that this is a good
opportunity to evaluate what we have and what we need (Although "need"
changes from project to project) without any aggressive (and frankly,
hostile IMO) behavior.

What justifies such an aggressive tone? Don't forget that even if you're a
KDE contributor, you should always have the CoC in mind.

Re: Problems with infrastructure

By Martin Klapetek at 12/15/2014 - 06:16

On Mon, Dec 15, 2014 at 2:58 AM, Kevin Kofler <kevin. ... at chello dot at>
Our very own manifesto, which we established not so long ago, does not
dictate that a project must be a kf5- or kdelibs-based application to be
considered a KDE project.

Also, this is a horrendous and concerning way of speaking, please don't do
that again.


Re: Problems with infrastructure

By Kevin Kofler at 12/15/2014 - 17:25

Martin Klapetek wrote:
But there *is* an expectation that the projects use KDE infrastructure, so
the implication in "I also admit that I would probably feel a little bit sad
if Trojita ended up to be the only project which sticked with Gerrit" (Jan
Kundrát) that it would keep using Gerrit no matter what KDE decides to use
officially does not fit into that. That creates the situation that "we
either all switch and have uniformity or we don't and then we end up with
reviewborad+gerrit" (Albert Astals Cid), which to me sounds a lot like
blackmail (of course not by Albert, he's just the messenger).

Kevin Kofler

Re: Problems with infrastructure

By Jan Kundrát at 12/16/2014 - 06:28

On Monday, 15 December 2014 22:25:37 CEST, Kevin Kofler wrote:
I do not see anything wrong with using two different tools if the community
cannot agree on using a single one. Yup, if there was consensus, it would
be great to do it in a unified way, but that consensus apparently isn't
here now.

I fail to see how this is any attempt at blackmail.


Re: Problems with infrastructure

By Ben Cooksley at 12/15/2014 - 23:12

Hi all,

Going to reply to all the various bits and pieces that have been
mentioned in order now. Apologies for the long mail.

For deleting branches, I think we can allow this - given some
protection for certain branches (like the KDE/* branches for
instance). Note that courtesy of the backup functionality in our
hooks, no branch or tag is ever truly deleted from a repository on our
servers. I don't know how easy it would be to alter this behaviour
based on the account name of the developer.

In terms of tools: for both clear messaging, and to minimize
maintenance effort we need to have just a single tool. It isn't just
the tool itself which has to be maintained: we have commit hooks,
integration with other bits of infrastructure and so forth, which also
need to be both implemented and maintained. The more custom work we
have, the harder it is to upgrade things. We'll confuse newcomers if
projects A, B and C are reviewed on tool X while projects D, E and F
are reviewed on tool Y. A single tool would be best here. Let me make
clear that it is not a case of Reviewboard vs. Gerrit here - as other
options need to be evaluated too.

In regards to the difficulty of Gerrit - I tend to agree with this
argument (it took me at least a minute to find the comment button, and
I didn't even get to reviewing a diff). Plus there are major concerns
with integration into our infrastructure, such as pulling SSH keys
from LDAP for instance (no, you can't have the tool maintain them
itself - as mainline Git and Subversion need the keys too).

In terms of Reviewboard statistics - I don't have those to hand.
Someone would need to come up with some scripts which could be run
against the web server logs to generate some numbers here. We do have
the logs though.

Please note that any discussion of tools should be on the merits of
the tools themselves. Things like CI integration are addons, which
both Reviewboard and Gerrit are capable of. The only reason we don't
have Reviewboard integration yet is a combination of technical issues
(lack of SNI support in Java 6) and resource ones (some projects take
a long time to complete, and I'm concerned we don't have the
processing power).

In terms of a modern and consistent project tool - I agree here. A
long term todo item of sysadmin is to replace our Chiliproject instance. The
question is of course - with what. Chiliproject is now unmaintained,
so we do have to migrate off to another solution at some point. If the
new tool happens to be more integrated in terms of code review, that
is a bonus from my point of view (as it means the integration will be
better, and there is one less piece of infrastructure to maintain).

Thanks to Luca, we have a list of possible options to review. Contact
has already been made previously with the Gogs developers, so it is
possible they would be amenable to making changes necessary to support
what we need. Phabricator is also interesting, although we may have to
overcome some barriers with callsigns due to the sheer number of
repositories we have. Its code review tool is similar to Reviewboard,
but it has a much more sophisticated CLI client you can use. If anyone
has any other possible solutions, I would like to hear about them as
well.

@Jan: could you please outline what you consider to be the key
advantages? At the moment I understand that you are after:

1) CI integration to pre-validate the change before it gets reviewed
2) Ability to directly "git pull" the patch (which Phabricator's arc
tool would meet I believe?)


Re: Problems with infrastructure

By Sebastian Kügler at 12/19/2014 - 10:08

On Tuesday, December 16, 2014 16:12:05 Ben Cooksley wrote:
What I would find helpful, concretely:
- being allowed to delete "own" branches
- being allowed to force-push to "own" branches: since I could delete and
recreate a branch, I may just as well be allowed to force-push directly; this
makes it a bit easier to keep branches as clean as possible

where "own" means "branchname starts with username/", for example
sebas/breakpluginloader
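A sketch of what that would allow, using a throwaway local bare repository in place of the shared KDE remote (the branch name follows the <username>/<topic> convention mentioned earlier; paths and names are placeholders):

```shell
set -e
tmp=$(mktemp -d)

# Throwaway stand-in for the shared KDE repository
git init -q --bare -b master "$tmp/server.git"

git init -q -b master "$tmp/work"
cd "$tmp/work"
git -c user.name=sebas -c user.email=sebas@example.org \
    commit -q --allow-empty -m "initial"
git remote add origin "$tmp/server.git"
git push -q origin master

# Publish a personal topic branch following the <username>/<topic> scheme
git push -q origin HEAD:refs/heads/sebas/breakpluginloader

# Rewrite it locally and force-push the cleaned-up version ...
git -c user.name=sebas -c user.email=sebas@example.org \
    commit -q --allow-empty --amend -m "initial, reworded"
git push -q --force origin HEAD:refs/heads/sebas/breakpluginloader

# ... and delete the branch once it is no longer needed
git push -q origin --delete sebas/breakpluginloader
```

The server-side policy under discussion would simply restrict the force-push and delete operations to branches matching the pusher's own username/ prefix.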


Re: Problems with infrastructure

By Kevin Kofler at 12/19/2014 - 21:08

Sebastian Kügler wrote:
Why can't we just allow delete and force-push of all branches for all KDE
contributors? Hasn't it always been KDE policy that contributors are trusted
not to do stupid things? Why do we need to enforce this through technical
means?

Kevin Kofler

Re: Problems with infrastructure

By Albert Astals Cid at 12/20/2014 - 08:42

On Saturday, 20 December 2014, at 02:08:53, Kevin Kofler wrote:
Yes, we trust contributors not to do stupid things, but there's a big gap
between that and letting people delete master branches of projects; we already
had those safeguards in svn, and having them in git makes sense too.

It's not that we think people are evil or stupid, it's just that we all make
mistakes, and it is easier to put some basic safety checks in place than
having to do all the work of recovering some very important branch that got
deleted.


Re: Problems with infrastructure

By =?UTF-8?Q?Nicol... at 12/19/2014 - 21:24

2014-12-19 22:08 GMT-03:00 Kevin Kofler <kevin. ... at chello dot at>:
The way the hooks work now, I think we should allow delete and not
force-push, because deleting a branch stores a backup ref.

Re: Problems with infrastructure

By Jan Kundrát at 12/16/2014 - 17:23

Hi Ben,

In case of Gerrit, there is no need for custom hooks, as they stay on the
main git server, and therefore I believe this point is not relevant to its
adoption. The whole setup has been designed and implemented in a way

As for the integration bits, they're done now. The tool just talks to LDAP,
and maintenance of that connection is effectively zero, unless a dramatic
revamp of our LDAP is planned. The repo mirroring was a matter of setting
up a single user account and configuring proper ACLs, and these are also
finished already.

I can understand the general reasons for limiting the number of services
which we offer and support. However, I would appreciate it if we mentioned how
big these costs are, as there's some room for misinterpretation otherwise.

While true in general, I fail to see how it is relevant to Gerrit. What
custom bits are involved here?

I haven't received a single complaint from the Trojita GCI students about
any difficulty with this. They did struggle with making an actual change to
the code, with proper coding style, with commit message formatting, with git
in general; they even failed to understand the written text about CCBUG and
BUG keywords in our wiki. But nope, I haven't seen them struggle with the
need to use Gerrit or its differences from RB. YMMV, of course.

Because the majority of complaints actually came from people who are
well-versed with ReviewBoard, my best guess is that there's muscle memory
at play here. This is supported by an anecdote -- when I was demoing the RB
interface to a colleague who maintains Gerrit at $differentOrg, we both
struggled with finding buttons for managing a list of issues within RB.
It's been some time since I worked with RB, and it showed.

I remember having a hard time grokking the relation between a "review" and
"attaching/updating a file" on RB. I didn't read the docs, and it showed.

I understand that people would like to avoid migrating to Gerrit if a
migration to a $better-tool was coming. Each migration hurts, and it makes
a lot of sense to reduce the number of hops.

However, what I'm slightly worried about is postponing Gerrit indefinitely
until all future replacements are evaluated. I don't see people putting
significant time into any alternative for code review right now. Do we have
any chance of these people making themselves known in the near future? How
long would be a reasonable waiting time for a testing deployment of
alternate tools? When are we going to reduce our candidates to just the
contenders which have been deployed and tested by some projects?

The documentation, however, explains the functionality in a pretty clear
manner.

We also aren't the first project trying to work with Gerrit, so there's
plenty of tooling available right now, not "to be written". There's a
text-mode interface, the "gertty" project, there's integration in
QtCreator, there are pure-CLI tools for creating reviews, other web UIs
in development, there are even Android clients.

Yes, SSH-keys-in-LDAP is a PITA, but given that one needs a patched OpenSSH
to look up keys from LDAP anyway, I don't think this is a blocker issue.
The situation is exactly the same with the Gitolite setup which currently
runs on git.k.o though, as that doesn't talk to LDAP either. As you
mentioned during our IRC chat, there's a Python daemon which polls for
changes in LDAP, and propagates these into Gitolite's config backend in a
non-public git repo. Why wouldn't this be enough for Gerrit, then?

Gerrit has both SSH-authenticated API and a REST HTTPS API for adding and
removing of SSH keys by an admin account. *If* this is needed, I'll be
happy to make it work, it's simply a matter of calling two trivial scripts.
Would you see any problems with hooking into the identity webapp or its
backend, if there's any, for this? An edge trigger would be cool.

Are there any other concerns?

With varying levels of ease, I should add. In the end, everything is
achievable, and you can write tools which automatically pull patches sent
through mailing lists and build them, but the question is who is going to
do the work, and when it's going to be ready. I know that the CI+Gerrit
thing is now done and solved, and I also know that I won't be spending my
time redoing the same for ReviewBoard. Nobody bothered to do this
pre-merge CI for RB in the past years. Do we have some volunteers now?

And nobody caring enough to do the work, I suppose. AFAIK Jenkins runs on
Java 7 just fine, but apparently nobody found time for such an upgrade.
There's nothing wrong with this, of course, but it doesn't suggest that
suddenly these people will have time for setting up early CI with RB.

This is not really an all-or-nothing question. If there's a project which
exceeds our current HW possibilities (you mentioned Calligra before,
right?) and we cannot easily get more HW (did someone ask the foundation's
treasurers for funds for HW rental, or approached some of the obvious
candidates such as RedHat or SuSE asking for HW access?), perhaps that
project can simply be omitted from these pre-merge CI runs.

We've chatted a couple of times about the limits of the current CI setup,
about its inability to perform checks in parallel. Is the existing
architecture going to scale with these pre-merge CI runs without
substantial changes?

Again, this is something which is solved now with the CI setup that is
behind Gerrit. The changes were in the glue code, the code which
schedules the builds, distributes the jobs and which decides when to build
what. It's still using the KDE CI's Python scripts for managing library
deps and for actually launching the build (I sent the necessary patches
your way).

See above for my view of mixing the quest for finding a decent project
management tool and for finding a good code review system. To go a bit to
the meta side, IMHO, a universal tool which does plenty of things in a
not-excellent manner is worse than using diverse set of tools which do one
thing each, and do it well. That's why I see a Chiliproject replacement as
an orthogonal topic to the choice of a code review tool.

- With the CI actually testing not just the change in isolation (as a
result of it on top of what was in the repo at the time the change was
made), but the result of the change as applied to the current state of the
repo at merge time, and with a user-visible possibility of retriggering a
check job.

- Being able to do "trunk gating" for projects which care enough. That is,
there's a tool which makes sure that there are no regressions, and which
won't let in commits which break the build or cause tests to fail. (And
yes, I know that there'll always be an option of direct pushes; I am not
arguing against that, it appears to be a point which people require. OK
with me.)

- Being able to do cross-project verification, i.e. "does this change of
kio break plasma-framework?"

- Performing builds on various base OSes, against the Qt version provided
by the system (Trojita aims at supporting Qt 4.6-4.8 and 5.2+), using
ancient compilers (yes, C++11 and GCC 4.4 are so much fun when taken
together) and different mixes of optional dependencies and features to be
built. In short, testing in the environment people will actually be using,
including Windows and Mac.

With Gerrit, one has access to the full history of each change, including
access in offline mode. This is opt-in, so people who don't care will not
have their clones "polluted by this nonsense", while people who do care
have them "enhanced by this valuable data". This happens with no extra
tooling. I'm sure I could come up with e.g. local scripts that build these
git refs from the (history of) patches on RB, Phabricator, GitHub,
Gitorious or whatever, but native support trumps scripting each of them.

According to the docs, `arc` is something which just pushes patches around.
Working with patches is different from having a git ref to work on.

Have you ever used openSUSE's Build Service and its CLI client, `osc`?
That's what happens when one tries to reimplement an SCM inside a tool
whose primary focus is something else. My favorite misfeature is the
"expand and unexpand linked packages" thingy, and the associated hiding of
merge conflicts when a source package changes. That's not fun to debug, and
it won't ever happen with plain git because git comes with excellent
support for merges. That support is a result of many years of extremely
heavy use of git by a ton of developers. I don't expect Facebook to be able
to cope with *that* regardless of their engineering size, and therefore I
expect that `arc` will fail when people use it for non-obvious stuff.

I suppose most of our developers are either already familiar with git, or
they have to learn it anyhow to be able to participate in our community in
an efficient manner. Introducing another patch manager to the mix doesn't
help, IMHO. This is just to illustrate my experience with tools that
behave quite like an SCM but do not actually implement full SCM
functionality. They work well for quick demos, but they suck when you
start using them seriously.

I would encourage anyone who evaluates these tools to pay attention to
these not-so-obvious usage issues. Having a CLI tool that can fetch a patch
and apply it to a local checkout is not equivalent to native git refs. It's
an important building block, but not a finished tool.

I know that I won't be in a business of building these tools. I'm quite
happy with having them out-of-box with Gerrit.


Re: Problems with infrastructure

By Jeff Mitchell at 12/16/2014 - 18:11

Don't want to weigh in on Gerrit as I don't know it well enough, but as
for Phabricator, Ben may have forgotten but we did evaluate it a while
back. It was neat but had a very serious problem: you needed an account
to even view anything (no public access), and once you got into it
everything was completely open. There was no real way not to give
everyone the keys to the kingdom. This was getting tracked in
an upstream report, which is now marked as resolved. So
it may be a viable candidate again. But it's definitely a very
opinionated way of doing things.

To be honest, I actually think that this discussion needs to start with
a more fundamental question than the technical specifics of the various
tools, because at this point we probably have several potential tools
that can meet our needs. So the question I think is fundamental is: what
is our goal in a tool?

- If the goal is to make it comfortable to those that are used to and
like using GitHub, we should be looking at Gogs (I'm comfortable with Go
so could even code in some missing features, potentially).

- If the goal is to put code review front-and-center, mandatory even, we
should look at Gerrit.

- If the goal is to have a Phabricator-style hub (not sure what else to
call it), we should look at Phabricator.

I don't know the best way to choose from that list -- and this is part
of the reason that trials stalled after Phabricator and GitLab. As far
as I'm concerned, we can put it to a poll or vote, although I think
capturing the general sentiment of the KDE community rather than a
smaller subset of interested people is likely to be difficult. I ran the
trials of Phabricator and GitLab in the past and I'm happy to do trials
of Gogs and Phabricator now (Gerrit obviously doesn't need this as it's
currently being trialled).

Regardless, any move to a different tool will require compromises, and
will make some users happy and others unhappy. But it's not reasonable
to expect the sysadmins to support multiple parallel systems, and I
really do think that until we figure out what we actually want, and by
we I mean the general developer community, it's somewhat wasted time --
especially because until we know what our ideal end goal is, we won't
know if the compromises we need to make in pursuit of that end goal are
worth making, which makes it easy to keep discarding solutions that
don't quite measure up.


Re: Problems with infrastructure

By Jan Kundrát at 12/17/2014 - 07:01

Hi Jeff, thanks for a very reasonable mail, I don't have much to add to it
in general, except for one item:

Maybe there is a misunderstanding of some kind -- I do not expect sysadmins
to take care of a system like Gerrit. I'm volunteering for doing this work,
and at Akademy some other people expressed general interest in helping out,
too. I do not foresee any increased load on our sysadmins due to Gerrit.

With kind regards,

Re: Problems with infrastructure

By Jeff Mitchell at 12/17/2014 - 09:47

Hi Jan,

I understood that to be the case -- I'm really meaning for a general,
KDE-wide solution.

Personally I don't have an issue with volunteers taking care of
non-official systems if it helps their productivity. If Gerrit wasn't
where KDE as a whole went, and you wanted to put the effort in to keep
Gerrit working for you and integrated with the rest of the KDE systems,
more power to you.

The issue I see (which doesn't necessarily reflect my personal views) is
that KDE projects have been required to use KDE infrastructure. I forget
where that's written/required, but I do know that it exists. The
purpose of this was to avoid fragmentation or making it difficult to
find the full breadth of KDE projects, or requiring KDE developers to
sign up for multiple e.g. bugtracking systems just to comment on another
KDE project.

I tend to be productivity-oriented rather than dogmatic, but I certainly
don't speak for everyone on that point.


Re: Problems with infrastructure

By Sebastian Kügler at 12/18/2014 - 09:52

On Wednesday, December 17, 2014 08:47:09 Jeff Mitchell wrote:
I think you're referring to the KDE Manifesto. There's nothing in there
about Reviewboard, or a requirement that projects have to use
infrastructure hosted on KDE servers, so I don't see that as a blocker.

Of course it would be prudent to give KDE's sysadmins access at some point,
but it's not required per se.


Re: Problems with infrastructure

By Albert Astals Cid at 12/19/2014 - 18:26

El Dijous, 18 de desembre de 2014, a les 14:52:12, Sebastian Kügler va
I don't agree with what you said; the manifesto says

"Online services associated with the project are either hosted on KDE
infrastructure or have an action plan that ensures continuity which is
approved by the KDE system administration team"


Re: Problems with infrastructure

By Sebastian Kügler at 12/23/2014 - 09:41

On Friday, December 19, 2014 23:26:15 Albert Astals Cid wrote:
I stand corrected.

Re: Problems with infrastructure

By Jeff Mitchell at 12/19/2014 - 11:53

I seem to remember this being a big problem when Necessitas was joining
KDE...they had to move their repos. But maybe I either misunderstood or
am misremembering what the issue was at the time.


Re: Problems with infrastructure

By Jan Kundrát at 12/18/2014 - 10:54

On Thursday, 18 December 2014 14:52:12 CEST, Sebastian Kügler wrote:
Hi, that's always been the case: all sysadmins have root access, and they
also have the "admin" role within Gerrit.


Re: Problems with infrastructure

By Ben Cooksley at 12/16/2014 - 18:58

On Wed, Dec 17, 2014 at 10:23 AM, Jan Kundrát < ... at kde dot org> wrote:
Hi Jan,

I was referring to the audits here - commits can't make it into Gerrit
which could never be replicated to the canonical Git repositories.

If Gerrit were to become primary infrastructure then there would be an
expectation it retrieves SSH keys from LDAP in some way.
This is how Subversion and Git work from the developers' point of view.

The cost varies depending on the tool. For some things - like Drupal -
the cost is quite minimal, especially when well-known and widely used
modules are used.
Basically, the more integration required, the higher the cost.

Anything around source code management requires higher level
integration as a general rule. People expect hooks to close things, CI
integration and the like.

Depends on what functionality it is missing for becoming a full blown
KDE tool. If we keep it strictly as a code review tool, you still need
to integrate the audit hooks.
Plus it needs to work with whatever our CI system is - and that is
currently Jenkins.

If we were to replace Jenkins, you have indicated that custom work
would be required to get reports for tests and tools like cppcheck
generated and published.
That has a maintenance cost as well.

We don't postpone things indefinitely. This change around code review
and project management has been brewing for a long time.

Because Gitolite doesn't offer people a UI to change their SSH keys.
Modification would still be needed to Gerrit to switch this off - and
to put a message telling them to head to KDE Identity.

There is nothing publicly exposed. I would prefer to avoid tie-ins to
the web app itself, as it needs replacement as well.
As for the backend, you can use standard syncrepl to watch for
changes. OpenLDAP doesn't support anything else.

As a general rule, despite a few people hinting that it would be nice,
nobody has indicated they would be willing to work on it.
I certainly haven't seen any research on it.

Don't know if you are talking about us or upstream here.
For us - Jenkins runs on a Debian Squeeze-LTS system at the moment,
which locks it to Java 6.

When I mention "don't have the power" I mean "we don't have it at the moment".
I always assume more resources aren't available when considering
things - as in the current climate it is best not to expect too much.

Define "perform checks in parallel" please.

Jenkins itself can handle that fine - the scalability issues I see are
hardware limitations.
We only have 3 systems at the moment - and only one of those is of
high CPU calibre.

The problem is many of the options we have seen thus far do both.
Phabricator for instance.

Most of this is the capability of the CI system itself.

So to cut a long story short: you want native Git refs and nothing less will do?


Re: cppcheck on

By Jan Kundrát at 12/17/2014 - 13:30

On Tuesday, 16 December 2014 23:58:02 CEST, Ben Cooksley wrote:
Hi Ben, what I said is that generating pretty plots about historical
trends in the number of build warnings and the number of cppcheck issues
is something which Jenkins excels at, that I do not see any value in
having these graphs, and that they do not make sense in the case of
pre-merge CI. With pre-merge CI, there's no linear history anymore, so a
Jenkins-style graph showing how many warnings were produced over the
course of the last week is a metric which, IMHO, doesn't make sense. What
should such a graph look like, considering that the history might have
many dead-ends?

I took a quick look at getting the cppcheck data published into Gerrit,
but when testing this, I got back data which I cannot interpret properly.

1) For KIO's master branch with Qt5 (this is KF5, right?), the website [1]
says that there are no errors, no warnings. However, if I run the tool
locally on my laptop with no arguments, I get:

[src/widgets/jobuidelegate.cpp:78] -> [src/widgets/jobuidelegate.cpp:75]:
(error, inconclusive) Possible null pointer dereference: w - otherwise it
is redundant to check it against null.

If I use the same arguments as specified in websites/'s
config/build/global.cfg, I get a *ton* of warnings. Even after filtering
out missing include files, I still get quite a few warnings, such as:

<error id="uninitMemberVar" severity="warning" msg="Member variable
'KUrlCompletionPrivate::list_urls_no_hidden' is not initialized in the
constructor." verbose="Member variable
'KUrlCompletionPrivate::list_urls_no_hidden' is not initialized in the

2) I don't see any cppcheck results for Trojita. The web page at [2] says
"This plugin will not report Cppcheck result until there is at least one
success or unstable build.", but there have been 278 such builds so far. I
got similar results for the other build variations.

Now, I might be doing something terribly wrong. Could you please point me
in a right direction?

-> Are you sure that the version of cppcheck on build.k.o works properly
right now? All projects which I randomly picked report 0 warnings; that's
surprising given that there are some compiler warnings, and I've always
gotten more invalid warnings from cppcheck than from my compiler(s).

-> How are the custom include directories passed to cppcheck? Can it find
the headers of the dependencies?


[1] (link lost in archive)
[2] (link lost in archive)

Re: cppcheck on

By Ben Cooksley at 12/17/2014 - 15:23

On Thu, Dec 18, 2014 at 6:30 AM, Jan Kundrát < ... at kde dot org> wrote:
The graph is more useful for the post-merge phase, I will admit. It is
used by developers to track their progress and to watch for deviations,
so I know it is useful.
Please remember to see both sides of the coin here. Pre-merge CI is
not the only type of CI.

Regardless - we should only have *one* system for doing CI runs,
regardless of whether they are post or pre merge.
Two systems would mean twice the maintenance effort (projects have to
be registered with it, etc) - and need twice the infrastructure unless
they can somehow collaborate on who is using which resources. That is
simply wasteful.

The version works perfectly fine. cppcheck simply is not run for the
majority of projects, as it is not enabled, in order to conserve resources.
cppcheck, much like gcovr (Cobertura in Jenkins), is only enabled on
request for a project.

See various files in config/build/ for an example of how this is done.
Zanshin and Skrooge use this.

It is also disabled by default as for some projects it can make builds
take an extremely long time to finish - due to the copying over to the
Jenkins master of materials needed to produce the reports. This
affects kdelibs in particular.

Not sure if this is needed - previously it has worked fine. This could
be due to our environment variables though.


Re: Problems with infrastructure

By =?utf-8?Q?Thoma... at 12/15/2014 - 10:59

Given what he wrote, how he wrote it and *when* he wrote it, he probably had a very hard day - after figuring out that *yesterday* was Sunday ;-)


If you believe that a drunken person cannot type correctly - that does not comply with the Austrians I happen to know :)
Large parts of the mail do not make sense at all, though.

Re: Problems with infrastructure

By Aleix Pol at 12/14/2014 - 22:09

On Mon, Dec 15, 2014 at 2:58 AM, Kevin Kofler <kevin. ... at chello dot at> wrote:
Hi Kevin,
I don't appreciate the tone. Jan is being constructive here and has
been working together with us to improve our infrastructure. Your
comment doesn't help in any way.


Re: Problems with infrastructure

By Milian Wolff at 12/15/2014 - 05:48

On Saturday 13 December 2014 18:13:41 Albert Astals Cid wrote:
Personally, I don't see why it's a bad thing to have two options, if both
fulfill different users' needs. Reviewboard is apparently liked by some, and
it's certainly simple to send trivial patches with it. Gerrit, on the other
hand, is much better for people who work a lot on projects, as you can get
much more productive with it. You just use git, and the rest is handled by
the web UI.

I can just say that I like using it in the setup they have for Qt. It's much
more productive to work on patch sets and then push them to an alias remote.
Then I can fix them up and/or rebase and push again to update everything. With
Reviewboard, I'd need to manually push each individual patch, and updating
them is again just as much work.


Re: Problems with infrastructure

By Albert Astals Cid at 12/15/2014 - 15:35

El Dilluns, 15 de desembre de 2014, a les 10:48:16, Milian Wolff va escriure:
I've already written it in lots of places but I'll try to do it again:
* It makes it harder for newcomers (and developers in general), since they
have to find out which of the two patch review systems to use for a given
repository
* People need to learn two tools instead of one to contribute patches to
"KDE projects"
* It increases our maintenance effort (there are two web systems that can
break instead of one)

And I could find more, but I sincerely think these three are "bad enough".


Re: Problems with infrastructure

By Luigi Toscano at 12/10/2014 - 17:01

Albert Astals Cid ha scritto:
git-review should solve (or ease) most of the issues there (send and update
the reviews).

See other projects using it:
(links lost in archive)

In order to check the existing reviews, there are other tools with varying
degrees of completeness, like:
* (link lost in archive)
* (link lost in archive)

(yep, many coming from the OpenStack project :)
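
For reference, git-review is configured per repository via a `.gitreview` file at the top of the tree; a sketch with placeholder values (this is not KDE's actual Gerrit host or project name):

```ini
# .gitreview - read by the git-review tool when you run `git review`.
# Host, port and project here are placeholders, not a real instance.
host=gerrit.example.org
port=29418
project=trojita.git
defaultbranch=master
```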


Re: Problems with infrastructure

By Christian Mollekopf at 12/12/2014 - 04:35

On Wednesday 10 December 2014 16.53:09 Jan Kundrát wrote:
I do like being able to view patches individually, so that's fine by me. I
just want to avoid sending and reviewing individual patches when they belong
together.
The current workflow I'm using for reviewing is that I review feature branches
developed by others, and if I'm happy with them I merge them and delete them
from upstream.

What I lack with that is of course a tool to communicate about defects in the
provided patches. Reviewboard can be used for that, but creating a reviewboard
entry for every commit is IMO too much work, and having all commits merged
isn't all that useful as it gets messy.

So if Gerrit allows treating feature branches as a whole while not merging
the commits, it may be just what I need.


Re: Problems with infrastructure

By Aaron J. Seigo at 12/11/2014 - 02:48

On Wednesday, December 10, 2014 15.27:31 Ben Cooksley wrote:
I suspect the issues are the same ones that led us to experiment with gitlab a
while back. Christian's response seems to back that up.

A modern, consistent, unified, user-friendly interface to the source lifecycle
of a project is pretty desirable. Github and the like have really set people's
expectations there pretty high. Having a bunch of separate tools with
somewhat-to-less-clunky interfaces, each with their own individual learning
curves is just not very attractive when Github sits there shiny and usable.
It's also what more and more new developers get to know through their first
contributions.

Whether Gerrit is a useful tool or not, another separate tool won't bring
the desired changes. It's an attempt to answer a rather different question.

I don't know if it matters to KDE or not, but if appealing to new generations
of developers and keeping existing ones as happy as possible is a goal[1],
then it would make a lot of sense to orchestrate a move to something that
provides such a "github-like" experience, even if it has other drawbacks.
Those drawbacks probably don't matter as much. If they did, github wouldn't be
thriving quite as much as it does.

[1] thinking that it "would be nice" or "that would be a good idea" does not
make it a goal

Re: Problems with infrastructure

By Luca Beltrame at 12/11/2014 - 05:03

Hello Aaron,

See my reply to Christian to see where (to my knowledge) we are at the moment.

Re: Problems with infrastructure

By Albert Astals Cid at 12/11/2014 - 18:23

El Dijous, 11 de desembre de 2014, a les 07:48:51, Aaron J. Seigo va escriure:
and siloed and non-free.

is it thriving in our target audience?


Re: Problems with infrastructure

By Aaron J. Seigo at 12/12/2014 - 06:27

On Thursday, December 11, 2014 23.23:59 Albert Astals Cid wrote:
Indeed, and unfortunately not enough people care about that.

If the target audience for KDE's git infrastructure is "developers": yes.
Overwhelmingly yes. Especially younger developers and people new to the OSS
scene; it seems to be the first contact with public source code repos for
many, many people these days.

Qt developers? Yes, again. Nearly 13,000 repos mention Qt in their name. Many,
many more use Qt (I've grabbed a few such projects that use Qt but mention it
nowhere in the project name or description ...)

KDE developers? 1800 repositories, though many of those are false-positives.
(in particular: Kernel Density Estimation)... so let's assume some fraction of
that. 1000?

Re: Problems with infrastructure

By Albert Astals Cid at 12/12/2014 - 17:48

El Divendres, 12 de desembre de 2014, a les 11:27:24, Aaron J. Seigo va
But we do :)

That's indeed a respectable number (I was going to check for myself but I
can't access GitHub from here at the moment).

So what do you suggest? We already tried GitLab and it didn't work; are there
any other GitHub clones out there that might work for us? I don't think us
going there and coding one is feasible :D


Re: Problems with infrastructure

By Scarlett Clark at 12/19/2014 - 17:16

Hello all..
I have tried to stay away from this thread, but I have reached my breaking
point. I have been literally killing myself working on my SoK project, which
entails upgrading our Jenkins CI system and getting all the projects working
on Linux/Windows/OSX, and I have moved to a Docker-based setup which will
allow further growth and expansion, plus portability. Jenkins is compatible
and works with Gerrit, so I don't understand why another CI is being
considered. Reviewboard is on my to-do list as well, as I have stated on
several occasions. Anyway, I feel quite discarded in this thread. Back to not
working on anything for long hours, every day.

Re: Problems with infrastructure

By Jan Kundrát at 12/21/2014 - 09:01

On Friday, 19 December 2014 22:16:36 CEST, Scarlett Clark wrote:
Because when I started this effort this spring, the existing CI appeared to
be on life support. I also wanted to expand the number of tools I
understand, and make sure that I can evaluate various CI tools without a
baggage of having to stick with a particular CI solution "just because
we've always done it that way". That's why I started looking at various
tools, and the whole stack which the OpenStack infrastructure team have
built [1] looked extremely compelling (note: they still use Jenkins).

The killer feature for me was their support for testing everything, where
each and every commit that is going to land in a repository is checked to
make sure it doesn't introduce any regressions. Not on a per-push basis,
but on a per-commit basis. This is something which has bitten me in the
past where I occasionally introduced commit series which contained
occasional breakage in the middle, only to be corrected in a subsequent
commit. That was bad because it breaks `git bisect` when one starts looking
for errors discovered in the future by unrelated testing.
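
To make this concrete, the same commit-by-commit property can be checked locally before pushing, using `git rebase --exec`; a sketch, where the logging command stands in for a real build-and-test invocation such as `make check`:

```shell
# Runs a check at every commit of a series, not just at the tip -- the
# property a per-commit CI enforces server-side. The `git log` command
# below is a placeholder for a real build/test command.
set -e
dir=$(mktemp -d)
git init -q "$dir" && cd "$dir"
git config "CI Demo"
git config
echo base > f.txt && git add f.txt && git commit -qm "base"
git branch -q base
for n in 1 2 3; do
    echo "$n" >> f.txt
    git add f.txt
    git commit -qm "step $n"
done
# Replay every commit since 'base', running the check after each one;
# the rebase stops at the first commit whose check fails:
git rebase -q -f --exec 'git log --oneline -1 >> checklog' base
wc -l < checklog    # one line per checked commit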

At the same time, doing per-commit tests means that these tests have to run
in parallel, and that there must be $something which controls this
execution. This is well-described at [2], so I would encourage you to read
this so that you understand what the extra functionality is. To the best of
my knowledge, Jenkins cannot do that. Based on your level of experience
with Jenkins, do you think it's possible with it alone? Would that also be
possible in a cross-repository manner?

Now that we've established a necessity to use something extra to control
this logic of job executions, we still have some use for Jenkins of course.
Something has to serve as an RPC tool for actually scheduling the build
jobs on various servers/VMs/workers/chroots/dockers/whatever. The question
is whether it still makes sense to use Jenkins at that point, given the
nice features (such as being able to track the history of the number of
failing tests, or having a pretty dashboard pointing to faulty commits) on
one hand, and having to create a ton of XML files with build job
definitions on the other. Does Jenkins still provide a net positive?

The system which I was building had no need for drawing graphs of compiler
warnings per project throughout past two months. What I wanted to have was
an efficient system which will report back to the rest of the CI
infrastructure the result of a build of a proposed change to help keep a
project's quality up to a defined level. The only use of Jenkins in that
system would be for remotely triggering build jobs, and somehow pushing the
results back. I do not think that going through all of the Jenkins
complexity is worth the effort in that particular use case.

BTW, the way in which KDE uses Jenkins right now does not really make use
of many Jenkins functions. The script which prepares the git tree is
custom. The management of build artifacts is reimplemented by hand as well.
In fact, most of the complexity is within the Python scripts which control
the installation of dependencies, mapping of config options into cmake
arguments etc. These are totally Jenkins-agnostic, and would work just as
well if run by hand. That's why I'm using them in the CI setup I deployed.
Thanks for keeping these scripts alive.

So in the end, I had a choice of either using Jenkins only to act as a dumb
transport of commands like "build KIO's master branch" and responses such
as "build is OK", or bypassing Jenkins entirely and using $other_system. If
the configuration of the $other_system is easier than Jenkins', then it
seemed to be a good idea to try it out. Because I was already using Zuul
(see the requirement for doing "trunk gating" and speculative execution of
the dependent jobs as described in [2]), and Zuul uses Gearman for its
RPC/IPC/messagebus needs, something which just plugs into Gearman made a
lot of sense. And it turned out that there is such a project, the
Turbo-Hipster thing. I gave it a try. I had to fix a couple of bugs and add
some features (SCPing the build logs, for one), and I'm quite happy with
the result.
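
For the curious, the Zuul glue looks roughly like this; a hypothetical sketch in the style of a Zuul v1 layout.yaml, with placeholder project and job names rather than the actual configuration:

```yaml
# Hypothetical sketch of a Zuul (v1-era) layout.yaml; project and job
# names are placeholders, not the deployed KDE configuration.
pipelines:
  - name: check            # pre-merge: runs on every uploaded patchset
    manager: IndependentPipelineManager
    trigger:
      gerrit:
        - event: patchset-created
    success:
      gerrit:
        verified: 1
    failure:
      gerrit:
        verified: -1

  - name: gate             # trunk gating: nothing merges unless it passes
    manager: DependentPipelineManager
    trigger:
      gerrit:
        - event: comment-added
          approval:
            - approved: 1
    success:
      gerrit:
        verified: 2
        submit: true
    failure:
      gerrit:
        verified: -2

projects:
  - name: trojita
    check:
      - trojita-build-qt5
    gate:
      - trojita-build-qt5
```

The gate pipeline's DependentPipelineManager is what enables the speculative execution described in [2]: queued changes are tested as if the ones ahead of them had already merged.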

But please keep in mind that this is just about how to launch the build
jobs. TH's involvement in how an actual build is performed is limited to a
trivial shell script [3]. And by the way, the OpenStack CI
which inspired me to do this is still using Jenkins for most of its build
execution needs.

Anyway, as I'm following these discussions, I think you don't really have
many reasons to start being afraid that even that part of your work which
is 100% Jenkins-specific would come to no use. There appears to be a huge
inertia for sticking with whatever we're using right now Just Because™,
even if all technical problems are/were solved. This effort started ages
ago with an "I would like to have multiplatform, early CI for Trojita", and
now we're discussing "replacing the project management tool within KDE".
I'm being told requirements such as "it would be cool to have a single tool
doing everything" or "the CI has to support Mercurial and SVN". Other
people feel threatened by my work, and yet other people believe that having
alternatives is a bad thing.

See, I'm happy that I can now finally use tools which make my work on my
pet project reasonably efficient. I also think that it's cool to offer
these tools to other people within KDE. Based on face-to-face discussion
during Akademy, people who were present appeared to be interested, so I did
the work on this. I fully integrated this stack with the rest of the KDE's
infrastructure, and made sure that all KDE projects can use these tools if
they want to. The technical solution is now ready, and it's up to the
project maintainers to decide whether they want it or not. If they want to
stick with what they have, more power to them. If they want to wait an
unspecified amount of time until a general decision is reached, more power
to them, too. If they want to switch now, more power to them as well.

It would be great to have alignment and for all of us to use the same tool.
However, I seriously doubt that an alignment can be reached without either
inducing emotional stress on a subset of our community or compromising
efficiency of another subset. Is either of that a price that we want to
pay? I suppose we might know after a long discussion.

The system which is now made available through Gerrit is suitable for
projects that want to make sure the quality of their code is and remains
excellent, who consider a failing test a serious problem which must be
fixed, whose approach to compilers telling them "hey, there's a problem
with your code" is either "doh, you're right, let's fix it" or "nah, you're
wrong, let's silence this warning forever, so that we can cut the noise
down", and who are happy to let a CI system veto their commits if they
introduce detectable breakage. It might not be a good fit for people who
*insist* on sticking with their current way of working, and who believe
that flaky tests cannot be fixed and are a necessity of SW development.
Maybe it will need some reasonably well-contained additions to
support their use cases. Also, not every project which subscribes to these
quality values must use Gerrit, of course.

That system is ready to be used now. I promised I won't be pushing this to
people who don't care, and I intend to keep that promise. If you're a
project maintainer and you would like your project in Gerrit, you have my
support. If, however, you would like to revamp the review/CI/... platform
used throughout KDE as a whole, well, more power to you as well. I'll be
happy to help, but at this point I cannot imagine myself being a driving
force behind this. I expect my role to be limited to correcting factual
mistakes in Gerrit/Zuul descriptions from now on and helping iron out
possible problems with this setup. I, too, put quite long hours into making
KDE's CI better and I've so far felt substantial hostility by daring to
propose an alternative to How Stuff Has Always Been Done.

With kind regards,

[1] (link lost in archive)
[2] (link lost in archive)
[3];a=blob&f... (link truncated in archive)

Re: Problems with infrastructure

By Ian Wadham at 12/23/2014 - 05:19

Hello Jan,

On 22/12/2014, at 12:01 AM, Jan Kundrát wrote:
Well, it's Christmas, the season of goodwill.

So, Jan, how does all this explanation help Scarlett?

Does she have a mentor? Is anybody helping her?

Does anybody wonder why people keep drifting away from the KDE community,
as Scarlett appears to be about to do?

Read again what Scarlett has to say. I do not think her main message was to ask
"why another CI is being considered". She feels "discarded" by it all.

All the best,
Ian W.