Thanks Jeremy. To come back to the original idea, for an LHF crew how
about these:
- a weekly(?) list of LHF tickets sent to the dev list
- sharing filters of the above (can we put these on the "how to contribute" page?)
- adopt Jeff B.'s idea of tagging your own submissions as LHF as part of our
culture/process
It sounds like everyone is on the same page - there won’t be a rubber stamp.
It’s just to have a concerted effort to help these tickets move along as
they’re often the first experience that contributors have with the project.
I’m sure many of you know of the 'lhf' tag that’s out there with an
Tagging tickets as LHF is a great idea. There are plenty of people who
would love to set up a JIRA dashboard saved search / nightly email for LHF
tickets.
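As a rough illustration of what such a saved search / nightly check could look
like (a sketch only; the JQL, the CASSANDRA project key on the public Apache
JIRA, and the existing 'lhf' label are assumptions taken from this thread, not
an agreed filter):

    import requests  # third-party HTTP client

    # Query the public Apache JIRA REST API for open CASSANDRA tickets
    # labelled 'lhf'. The JQL here is illustrative, not an official filter.
    SEARCH_URL = "https://issues.apache.org/jira/rest/api/2/search"
    JQL = "project = CASSANDRA AND labels = lhf AND status = Open ORDER BY updated DESC"

    resp = requests.get(SEARCH_URL,
                        params={"jql": JQL, "fields": "summary", "maxResults": 50})
    resp.raise_for_status()
    for issue in resp.json()["issues"]:
        print(issue["key"], "-", issue["fields"]["summary"])

(A saved JIRA filter with an email subscription would do the same job with no
code at all; the script is just the shape of the nightly-email idea.)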
On Wed, Oct 19, 2016 at 1:34 PM Jeff Beck wrote:
> Would it make sense to allow people submitting the patch to flag things as
> LHF or small
Would it make sense to allow people submitting the patch to flag things as
LHF or small tasks? If it doesn't look simple enough, the team could remove
the label, but it may help patches get feedback more quickly; even something
saying "accepted for review" would be nice.
Personally if a
Also no one has said anything to the effect of 'we want to rubber stamp
reviews' so that ...evil reason. Many of us are coders by trade and
understand why that is bad.
On Wednesday, October 19, 2016, Edward Capriolo
wrote:
> I realize that tests passing, small tests, and trivial reviews will not
I realize that tests passing, small tests, and trivial reviews will not
catch all issues. I am not attempting to trivialize the review process.
Both deep and shallow bugs exist. The deep bugs, I am not convinced that
even an expert looking at the contribution for N days can account for a
majority
Let’s not get too far into the theoretical weeds. The email thread really focused
on low-hanging tickets – tickets that need review, but definitely not 8099-level
reviews:
1) There are a lot of low-hanging tickets that would benefit from outside
contributors as their first patch in Cassandra (like
And just to be clear, I think everyone would welcome more testing for both
regressions and new code correctness. I think everyone would appreciate the
time savings around more automation. That should give more time for a
thoughtful review - which is likely what new contributors really need to g
I specifically used the phrase "problems that the test would not" to show I
am talking about more than mechanical correctness. Even if the tests are
perfect (and as Jeremiah points out, how will you know that without reading
the code?), you can still pass tests with bad code. And is expecting
per
Unless the reviewer reviews the tests for content, you don’t know if they do or
not.
-Jeremiah
> On Oct 19, 2016, at 10:52 AM, Jonathan Haddad wrote:
>
> Shouldn't the tests test the code for correctness?
>
> On Wed, Oct 19, 2016 at 8:34 AM Jonathan Ellis wrote:
>
>> On Wed, Oct 19, 2016 at
Shouldn't the tests test the code for correctness?
On Wed, Oct 19, 2016 at 8:34 AM Jonathan Ellis wrote:
> On Wed, Oct 19, 2016 at 8:27 AM, Benjamin Lerer <benjamin.le...@datastax.com> wrote:
>
> > Having the tests pass does not mean that a patch is fine, which is why we
> > have a rev
On Wed, Oct 19, 2016 at 8:27 AM, Benjamin Lerer wrote:
> Having the tests pass does not mean that a patch is fine, which is why we
> have a review checklist.
> I never put a patch available without having the tests passing but most of
> my patches never pass on the first try. We always make mi
Having the tests pass does not mean that a patch is fine, which is why we
have a review checklist.
I never put a patch available without having the tests passing but most of
my patches never pass on the first try. We always make mistakes no matter
how hard we try.
The reviewer's job is to catch th
Yes. The LHFC crew should always pay it forward. Not many of us have a
supercomputer to run all the tests, but for things that are out there
marked patch_available, apply the patch to see that it applies cleanly; if it
includes a test, run that test (and possibly some related ones in the
file/folder etc. for
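For anyone new to that workflow, here is a minimal sketch of the
apply-and-test step (assumptions: a patch file attached to the JIRA ticket,
Cassandra's ant build and a -Dtest.name property for running a single test
class - check the build docs; the file and class names below are placeholders):

    import subprocess

    PATCH = "CASSANDRA-XXXXX.patch"  # placeholder: patch attached to the ticket

    # Check that the patch applies cleanly against the current branch, then apply it.
    subprocess.run(["git", "apply", "--check", PATCH], check=True)
    subprocess.run(["git", "apply", PATCH], check=True)

    # If the patch ships a test, run just that test class (class name is a placeholder).
    subprocess.run(["ant", "test", "-Dtest.name=TheNewTest"], check=True)

Even that much, plus a note on the ticket saying it applies cleanly and the new
test passes, saves the eventual committer a round trip.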
On 19 October 2016 at 05:30, Nate McCall wrote:
> if you are offering up resources for review and test coverage,
> there is work out there. Most immediately in the department of reviews
> for "Patch Available."
>
We can certainly put some minds to this. There are a few of us with a very
good und
That's a good idea, Ed, thanks for bringing this up.
Kurt, if you are offering up resources for review and test coverage,
there is work out there. Most immediately in the department of reviews
for "Patch Available."
While not quite low-hanging fruit, it's helpful to have non-committers
look throu
So there are a bunch of us at Instaclustr looking into contributing to the
project on a more frequent basis. We'd definitely be interested in some
kind of LHF crew under which our new "contributors" could make tracks toward
becoming main committers, while having some nearby colleagues who can
help the
, but the pathway to
removing this burden is promoting contributors to committers.
My suggestion:
Assemble a low-hanging-fruit crew. This crew would provide general support
for small commits, logging, metrics, test coverage, things static analysis
reveals, etc. They would have a reasonable goal like