It seems that, yes, a model in which a patch to a given component would
need reviews by any one person in a specific pool of reviewers makes a lot
of sense. Even more so if the patch submitter doesn't have to figure out
who is in the pool.

Having a mechanism that supports a secondary, high-level review for cases
involving a lot of interrelated patches, architectural changes, UX updates,
etc. would also make a lot of sense. Call it a meta-review, perhaps; as in
reviewing a meta bug's set of changes.

-- 
Eric Shepherd
Senior Technical Writer
Mozilla Developer Network
https://developer.mozilla.org/
Blog: http://www.bitstampede.com/
Twitter: https://twitter.com/sheppy


On May 10, 2018 at 11:15 AM, Gregory Szorc <[email protected]> wrote:



On May 10, 2018, at 06:47, Randell Jesup <[email protected]> wrote:

On Thu, Apr 26, 2018, at 12:41 AM, Mark Côté wrote:
How we might use blocking reviewers in our workflow is still open, but
it could be used for training up reviewers, in which case the trainee
would be a regular reviewer and the module peer/owner would be a
blocking reviewer.


It's not uncommon for me to submit patches for review from multiple people
where I require all reviewers to sign off on the patch, usually because I
ask them to look at different parts of the patch. I don't think I have
ever sent a patch to get review from more than one person with the
intention of landing it if only one person OKs it. (I'd probably needinfo
people to get that kind of feedback.) So ignoring potential new workflows
like the training one you mention, I think I would exclusively use blocking
reviewers. It's good to know that Phabricator will help me avoid
accidentally landing patches when not all of my reviewers have signed off.


Agreed... but it sounds like they're not quite sure how this will be
used. I'm concerned that developers may be confused and just ask for
review by N people, not realizing a patch can land when only one of them
r+'s. (Especially if the developer doesn't frequently use multiple
reviewers.) At minimum, if this is how it works, there will need to be
clear communication about when to use this - or (better!) switch the
default to blocking, and make the unusual, more-work-to-ask-for case
any-of.

Once in a blue moon I'll ask for a fast review from any of N people,
then cancel the r?'s once I have one r+. But this is really rare, and
mostly when there's a time deadline to land something ASAP (a sec issue,
or getting in ahead of a merge date). 99% of the time when I ask for N
people, I need r+ from all of them.

On the other side, I do know that the build peers (IIRC) use a shared
reviewing setup, where most bugs are thrown into a pool for any of them
to review. But that's not the normal workflow for any other group I know
of in Mozilla, at least in Firefox. (I don't know them all, though.)


When you take a step back, I think you have to conclude that our current
model of requesting reviews from *individuals* is not practical for the
common case. The way our code governance model works is that groups of
people (module members) are empowered to sign off on changes. But because
existing review tooling (practically speaking) only allows assigning
reviews to individuals (as opposed to a group consisting of module
members), that’s what we do. I.e. tools are dictating sub-optimal workflows
that don’t match our governance model.

In the common case, I think group assignment is preferred. Maybe it assigns
to an available person at submission time. But the submitter should not
need to be burdened with picking an individual. (This picking requirement
is especially hostile to new contributors who don’t know who to pick for
review.)

Phabricator has the concept of a normal “reviewer” and a “blocking
reviewer.” “Blocking” means they must sign off before the review is marked
as accepted.

I think the world we’ll eventually get to is one where most reviews are
assigned to groups (ideally automatically by mapping files changed to
appropriate reviewers) as normal non-blocking reviewers in Phabricator. For
special cases, we can assign individuals for review and/or set “blocking
reviewers” so someone or a member of a group *must* sign off. The review
submission tooling should make all of this turnkey - allowing people to
easily opt in to the special case if warranted. But most review requirement
policies should be codified so tools can apply/enforce them without human
intervention.
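To make the "mapping files changed to appropriate reviewers" idea concrete, here is a minimal sketch of how such a policy could be codified. All group names, paths, and the longest-prefix rule are invented for illustration; a real system would presumably keep this mapping in tool configuration rather than code.

```python
# Hypothetical sketch: route a patch to reviewer groups by matching each
# changed file against the longest configured path prefix. The mapping
# below is made up; it is not Mozilla's actual configuration.
REVIEW_GROUPS = {
    "dom/": "dom-peers",
    "dom/media/": "media-peers",
    "python/mozbuild/": "build-peers",
}

def groups_for_changes(changed_files):
    """Return the set of reviewer groups covering the changed files."""
    groups = set()
    for path in changed_files:
        matches = [prefix for prefix in REVIEW_GROUPS
                   if path.startswith(prefix)]
        if matches:
            # Prefer the most specific (longest) matching prefix, so a
            # dom/media/ change goes to media-peers, not dom-peers.
            groups.add(REVIEW_GROUPS[max(matches, key=len)])
    return groups
```

With a policy like this, the submitter never picks an individual: the tool derives the responsible group(s) from the diff, and blocking reviewers remain an explicit opt-in for the special cases.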

It’s worth noting that in a future world where the review tool isn’t
constraining us to assigning reviews to individuals and where group
assignment is the default, assignments to individuals may represent
shortcomings in the distribution of expertise. I.e. if only one person is
capable of performing a review, then we have a bus factor of 1 and there is
a business continuity concern. Assigning reviews to groups - perhaps
specialized groups like “dom experts” instead of the normal “dom peers”
group - helps reinforce that reviewing - and the knowledge required to
perform it - is a shared responsibility and isn’t limited to single
individuals. This is another reason why I think it is a good idea to lean
heavily towards group-based review over individual. Even if the group
consists of a single person, a group reinforces that the responsibility is
distributed.

I think that once review submission doesn’t require submitters to choose
reviewers and reviews magically end up on the plates of appropriate people,
we’ll all look back at our current workflow and view it as primitive.
_______________________________________________
dev-platform mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-platform
