On Saturday, 31 January 2015 11:14:01 CEST, Ben Cooksley wrote:
> Fixing a usability glitch and accepting a radical redesign of your
> interface are completely different.

Your mail suggested that they apparently do not care about improving their UI (because if they did, they would have solved everything already). I disagree, and I have provided evidence that Gerrit upstream does in fact care about users, including those who are not yet experienced with its UI.

> We're not the first ones to complain about its interface but they've
> not done anything to remedy this despite a complete redesign (which
> reduced the level of functionality, interestingly).

How does the ChangeScreen2 reduce the level of functionality?

I see the following in that section:

> 1) A note that Jenkins is a "glorified job launcher" as we don't use
> any of its advanced functionality (which I refuted - it is much more
> than that).

You reiterated that it is cool to have graphs tracking the number of failing tests. I proposed fixing the tests instead, and offered a solution which eliminates this problem for tests that are inherently unstable (see section 3.3.2). I also explained how running cppcheck and code coverage fits into this.

The way I understand this reasoning is: "we have failing tests" -> "we need graphs so that people know whether any more tests start failing". That sounds rather suboptimal to me. I would much prefer to fix the actual cause of the pain rather than provide tools which plot the pain level and the frequency of the patient's seizures. Defect tracking is a tool, not a goal. If there is another tool which ensures that no additional defects can enter a repository, why not simply use that? (Please see the report for dealing with non-deterministic tests; this *is* covered.)
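The report's own proposal for inherently unstable tests is in section 3.3.2; purely as an illustration (my own sketch, not the report's mechanism), here is a minimal retry gate for a known-flaky test. It still blocks the merge on a deterministic failure, so real regressions cannot hide behind flakiness:

```python
def gate_unstable(test, retries=3):
    """Pass a known-unstable test if any of `retries` attempts succeeds.

    A genuinely broken test fails every attempt and still blocks the
    merge. `test` is any zero-argument callable returning True on pass.
    (Illustrative names; not an existing tool's API.)
    """
    return any(test() for _ in range(retries))

# A deterministically broken test is still caught:
assert gate_unstable(lambda: False) is False
# A flaky test that fails twice, then passes, gets through:
attempts = iter([False, False, True])
assert gate_unstable(lambda: next(attempts)) is True
```

The point is that the retry budget is a property of the gate, not a reason to tolerate a permanently red dashboard.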

> 2) Some notes that a proposed patch may be based against a week old
> revision. This is making assumptions about how a Jenkins setup would
> be made - as we're in control of what it does there is nothing to stop
> us trying to apply the patch on top of the latest code in Git.

You have to somehow account for the delay between the moment a human reviews a patch and the moment it gets merged. Any number of other patches could have landed in the meantime. What builds a patch queue so that this case is covered?
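A minimal sketch of what such a patch queue does (illustrative names, not any existing tool's API): every patch is re-applied on top of whatever master is *at merge time*, and it is merged only after that exact combination tests green.

```python
from collections import deque

def merge_queue(master, patches, apply_patch, run_tests):
    """Serialize merges: re-test every patch on top of the latest tree.

    `apply_patch(tree, patch)` returns a candidate tree; `run_tests(tree)`
    returns True when CI is green. A patch approved against last week's
    master is therefore never merged without being re-tested against
    today's master. (Hypothetical helpers for illustration.)
    """
    queue = deque(patches)
    while queue:
        patch = queue.popleft()
        candidate = apply_patch(master, patch)  # rebase onto master as it is NOW
        if run_tests(candidate):
            master = candidate                  # advance master only on green
    return master

# Toy usage: a "tree" is just the set of patch names merged so far,
# and the patch named "bad" breaks CI.
apply_patch = lambda tree, p: tree | {p}
run_tests = lambda tree: "bad" not in tree
assert merge_queue(frozenset(), ["a", "bad", "b"], apply_patch, run_tests) == {"a", "b"}
```

Zuul's gate pipeline works along these lines; plain Jenkins jobs triggered per push do not.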

> In terms of checking dependency buildability, once again - this is
> possible but we don't do anything like this at the moment to preserve
> resources.

Given enough CPUs, how would you do this with our current Jenkins setup? This is absolutely not just a resource problem: you need something that builds these projects against the appropriate refs, and does so atomically. Zuul does this, and internally it is a ton of work. Jenkins does not do it. KDE's CI scripts do not do it, either.
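To illustrate what "atomic" means here (my own sketch, not Zuul's actual implementation; all names are made up): pin every project's branch to one concrete commit *before* any build starts, then run every build against that single snapshot, so a push landing mid-build cannot give half the stack a newer dependency than the other half.

```python
def build_with_pinned_refs(build_order, branches, resolve_ref, build):
    """Pin all refs first, then build everything against that snapshot.

    `resolve_ref(project, branch)` maps a branch name to a commit SHA;
    `build(project, pins)` builds one project against the pinned SHAs.
    (Hypothetical helpers for illustration.)
    """
    pins = {p: resolve_ref(p, branches[p]) for p in build_order}  # one snapshot
    for project in build_order:
        build(project, pins)  # every build sees the same pinned SHAs
    return pins

# Simulate a branch head that moves while we build: the resolver returns a
# different SHA on every call, yet both builds see the same framework pin.
heads = iter(["sha-1", "sha-2", "sha-3"])
seen = []
build_with_pinned_refs(
    ["frameworks", "app"],
    {"frameworks": "master", "app": "master"},
    lambda project, branch: next(heads),
    lambda project, pins: seen.append(pins["frameworks"]),
)
assert seen == ["sha-1", "sha-1"]
```

Without the pinning step, the "app" build could pick up a frameworks revision that the frameworks build itself never tested.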

> As for it not having a declarative configuration, we're in the process
> of refining the Groovy based Job DSL script Scarlett wrote. This will
> shift the configuration of Jenkins jobs entirely into Git, and
> depending on how things work out - jobs could be automatically setup
> when new repositories come into existence or when branch metadata is
> revised.

The report said that there are automated tools which provide workarounds for this aspect of Jenkins. It's good to see KDE adopting one now.

However, you are investing resources in making a tool with horrible configuration more usable. More power to you, it's your time, but this is exactly what the report says -- you are working around the tool's limitations here.

> About the only point left standing is that it doesn't check individual
> subcommits, but we've yet to see whether the KDE project as a whole
> sees this as necessary.

The fact that most of KDE's projects have no use for pre-merge CI does not imply that projects which want to opt in should be punished. This is absolutely *not* about pushing an advanced workflow onto everybody. It is about being *able* to accommodate such an advanced workflow at all.

This is possible today with Gerrit+Zuul, and it was easy to configure that way. Our Zuul builds the dependencies (i.e. projects outside of Gerrit) on a per-push basis, exactly as KDE's Jenkins does. No time is wasted on per-commit builds for these, because nobody could react to a failure anymore; a change is already in master by that point.

What this is all about is enforcing that each commit which goes through code review is regression free.

Cheers,
Jan

--
Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/
