Review policy


#1

Code reviews are carried out on GitHub pull requests.

Any Juju core team member can review any pull request. However, as the branch author, it is your responsibility to ensure that the reviewer sufficiently understands the changes.

approval

All branches need an approved (:+1:) review from another team member before they can be landed.

Approved branches can be merged by any team member using the $$merge$$ comment.

The exceptions to this rule are simple forward- or back-porting of a fix, and merging a previous release branch forwards.

conflict

Sometimes, as happens whenever more than one person is involved, there can be conflicting opinions on how to do things.

For very minor things, most of the time it doesn’t matter. As long as the code works and is well tested, we’re fine.

Sometimes it goes deeper than that. There are issues of code consistency to consider, and patterns of development within the codebase. Examples might include how we version API server facades.

The best way to resolve these issues is to get a call going between the branch author and reviewer to work through the issues.

blocking landing

In the severest of cases, a reviewer can block the landing with a -1 comment, or :-1:.

In these situations, if the conflict can’t be resolved between the author and reviewer, escalate to the appropriate managers.

A review that has been blocked should not be landed until the reviewer who gave the :-1: removes the block.


#2

We’ve discussed in the past that if the changes only touch CI Python test files, then once approved you can hit the green merge button, since none of the functional code or unit tests has been touched.

Is this still the case?


#3

Our team, at least, is leaning towards a “no” on that one. Talk is also tending towards not pushing directly to master for juju-qa-jenkins either.

I think this is a solid idea, as even the most trivial of changes can be muffed. I provide my own ham-fisted effort as proof.


#4

So the issue is that I don’t think human code review would have caught the problem with your change, as it looks valid. The real issue is whether it can be passed to Jenkins correctly.
Our options would be:

  1. Split out a staging Jenkins that we could use for iterating on patches to the Jenkins config, then allow only a bot to actually merge to master.
  2. Make it feasible to run up your own Jenkins for testing.
  3. Do as we are, and iterate on the primary Jenkins instance. In which case, you’ve already rolled out the changes by the time you put them up for review.

Are there other reasonable options?


#5

Regarding the CI Python tests, I see no reason to do a unit test run for a PR like that: none of the CI tests get run, so none of the changed code will be exercised. (If we add a CI test or two to the check/merge pipeline, this changes things.)
My suggestion would be to hit the merge button on PRs that only change Python tests and save some time.

I am 100% +1 on PRs-only for juju-qa-jenkins. It’s been a while since I last pushed directly out of bad old habit, but I have done it.

If anything, the PRs allow someone else to know what’s going on.

For simple validation that doesn’t rely on the author having already deployed their changes, we could have a CI step (Travis? or self-hosted, since it is a private repo) that runs jenkins-jobs test -r jobs/ci-run (and likewise for jobs/infrastructure, jobs/github, and jobs/mergejobs) to at least show the changes are valid.
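
As an illustration of that CI step, here is a minimal sketch of a Travis-style config. Only the jobs/ paths come from the suggestion above; everything else (the .travis.yml layout, the Python version) is an assumption, not the repo’s actual setup:

```yaml
# Hypothetical .travis.yml sketch -- the jobs/ directories follow the
# suggestion above; the rest is assumed, not the repo's real config.
language: python
python:
  - "3.6"
install:
  # jenkins-job-builder provides the jenkins-jobs CLI
  - pip install jenkins-job-builder
script:
  # "test" renders the job YAML without touching a live Jenkins, so a
  # broken macro or template fails the build before merge
  - jenkins-jobs test -r jobs/ci-run
  - jenkins-jobs test -r jobs/infrastructure
  - jenkins-jobs test -r jobs/github
  - jenkins-jobs test -r jobs/mergejobs
```

This only shows the YAML renders cleanly, of course, not that the resulting jobs behave correctly once deployed.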


#6

(Breaking my replies out across posts to keep each one relevant to its content.)

The problem with having a local or testing Jenkins is that you really need to run the job to make sure it works as expected; jenkins-jobs test ... will make sure the YAML makes sense, but not that what the job does is correct.

We also have the added complexity of having an ‘internal’ and ‘external’ jenkins instance that is split on the PR/CI responsibility line.

A ‘simple’ but perhaps time consuming process step would be:

If you change a job (i.e. after it’s landed and deployed), you must make sure the next run (or two) is successful.
This may require re-running a previous run, so you’re not waiting around for hours.

This would at least catch any easy errors. Care would be needed where you change a ‘core’ script (e.g. the lxd-runner): you would need to check a handful of jobs, some of which you probably weren’t interested in changing.

On that note, I feel I’ve let the Jenkins configuration get out of hand, and it’s really quite hard to understand what’s happening, with all the different scripts and macros that call other macros that slurp in scripts with placeholders passed in from the original job many levels up.
I really wish I had found a way to make this nicer to use.


#7

A minor technical point on how to approve/block.

I would prefer that we use GitHub’s review functionality, e.g. click “Review changes”, then “Approve” or “Request changes”, rather than using raw comments. That will play nicer with downstream tooling, such as Trello.


#8

I just ran into this as well. I didn’t update with JJB without getting approval, but that just meant my patch was very incomplete.
Original patch:
https://github.com/CanonicalLtd/juju-qa-jenkins/pull/192
patch fixing the rest of the issues:
https://github.com/CanonicalLtd/juju-qa-jenkins/pull/193

There were at least 4 things wrong with my original proposal, but it all looked reasonably sane.
Having non-experts do manual review might help them get better, but it doesn’t lead to catching a lot of issues.

I think the big key is to iterate fast and be on the hook for monitoring whether things are working as expected.