Thanks Jeremy. To come back to the original idea, for an LHF crew how
about these:
- a weekly(?) list of LHF tickets sent to the dev list (rough sketch below)
- sharing filters of the above (can we put these on the How to Contribute page?)
- adopt Jeff B.'s idea of tagging your own tickets as LHF as part of our
culture/process
- ping on an LHF Patch Available ticket if it's been reviewed and is just
hanging out

Extra credit:
- ping on a Patch Available ticket (on the dev list, even) if it's not been
reviewed but you think it's valuable/critical and it's past your review
comfort level
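
To make the first two items concrete, here's a rough sketch of what the
weekly digest could look like as a script. Purely illustrative: it assumes
the standard JIRA REST search endpoint is available on issues.apache.org and
that the status names below match our actual workflow, and it reuses the
project id and lhf label from the How to Contribute search Jeremy linked.

import json
import urllib.parse
import urllib.request

# Same project id and label as the How to Contribute search, narrowed to
# tickets that look like they are waiting on a reviewer.  The status names
# are an assumption; adjust to whatever the workflow actually calls them.
JQL = ('project = 12310865 AND labels = lhf '
       'AND status in ("Patch Available", "Awaiting Feedback")')

# Standard JIRA REST search endpoint (assumed to be enabled on this instance).
url = ('https://issues.apache.org/jira/rest/api/2/search?'
       + urllib.parse.urlencode({'jql': JQL,
                                 'fields': 'summary,status',
                                 'maxResults': 50}))

with urllib.request.urlopen(url) as resp:
    issues = json.loads(resp.read().decode('utf-8')).get('issues', [])

# One line per ticket, ready to paste into (or cron-mail to) the dev list.
for issue in issues:
    print('%s  [%s]  %s' % (issue['key'],
                            issue['fields']['status']['name'],
                            issue['fields']['summary']))

Cron that weekly and pipe it to mail, or just paste the output into a
message; the JQL itself stays shareable as a plain string either way.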



On Thu, Oct 20, 2016 at 11:39 AM, Jeremy Hanna
<jeremy.hanna1...@gmail.com> wrote:
> It sounds like everyone is on the same page - there won’t be a rubber stamp.  
> It’s just to have a concerted effort to help these tickets move along as 
> they’re often the first experience that contributors have with the project.  
> I’m sure many of you know of the ‘lhf’ tag that’s out there with an
> associated Jira search [1] on the How to Contribute page [2].  In the Jira 
> search, you can see several that are either “patch available” or “awaiting 
> feedback”, so that search is a place to start.  As for something fancier like a
> dashboard, I currently can’t make a share-able dashboard as a contributor to 
> the project.  Perhaps committers have more permissions there.
>
> 1) 
> https://issues.apache.org/jira/issues/?jql=project%20%3D%2012310865%20AND%20labels%20%3D%20lhf%20AND%20status%20!%3D%20resolved
> 2) https://wiki.apache.org/cassandra/HowToContribute
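
Decoding [1], the JQL behind that search is just:

    project = 12310865 AND labels = lhf AND status != resolved

Since a shareable dashboard apparently needs permissions we don't have,
tweaked filters can still be passed around as plain links instead. A tiny
sketch with stdlib urllib, purely illustrative:

import urllib.parse

# Base query from the How to Contribute page; append e.g.
# ' AND status = "Patch Available"' to narrow it (status name assumed).
jql = 'project = 12310865 AND labels = lhf AND status != resolved'
print('https://issues.apache.org/jira/issues/?jql=' + urllib.parse.quote(jql))
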
>
>
>
>> On Oct 19, 2016, at 5:27 PM, Jonathan Haddad <j...@jonhaddad.com> wrote:
>>
>> Tagging tickets as LHF is a great idea.  There are plenty of people who
>> would love to set up a JIRA dashboard saved search / nightly email for LHF
>> tickets.
>>
>> On Wed, Oct 19, 2016 at 1:34 PM Jeff Beck <beckj...@gmail.com> wrote:
>>
>>> Would it make sense to allow people submitting the patch to flag things as
>>> LHF or small tasks? If it doesn't look simple enough, the team could remove
>>> the label, but it may help patches get feedback more quickly; even something
>>> saying "accepted for review" would be nice.
>>>
>>> Personally, if a ticket sits for a few weeks I have no idea what next steps
>>> I should take to keep it moving forward. We may want to document what a
>>> person submitting a patch should do next, and if it is documented we should
>>> add a link on
>>> http://cassandra.apache.org/doc/latest/development/patches.html
>>>
>>> Jeff
>>>
>>>
>>> PS: My example is https://issues.apache.org/jira/browse/CASSANDRA-12633
>>>
>>> On Wed, Oct 19, 2016 at 12:21 PM Edward Capriolo <edlinuxg...@gmail.com> wrote:
>>>
>>>> Also no one has said anything to the effect of 'we want to rubber stamp
>>>> reviews' so that ...evil reason. Many of us are coders by trade and
>>>> understand why that is bad.
>>>>
>>>> On Wednesday, October 19, 2016, Edward Capriolo <edlinuxg...@gmail.com> wrote:
>>>>
>>>>> I realize that passing small tests and trivial reviews will not catch all
>>>>> issues. I am not attempting to trivialize the review process.
>>>>>
>>>>> Both deep and shallow bugs exist. As for the deep bugs, I am not convinced
>>>>> that even an expert looking at the contribution for N days can account for
>>>>> a majority of them.
>>>>>
>>>>> The shallow ones may appear shallow and turn out to be deep, but given
>>>>> that a bug already exists, an attempt to fix it usually does not arrive at
>>>>> something worse.
>>>>>
>>>>> Many of these issues boil down to simple, seemingly clear fixes. Some are
>>>>> just basic metric additions. Many get no interaction for weeks.
>>>>>
>>>>>
>>>>> On Wednesday, October 19, 2016, Jeff Jirsa <jeff.ji...@crowdstrike.com> wrote:
>>>>>
>>>>>> Let’s not get too far in the theoretical weeds. The email thread really
>>>>>> focused on low-hanging tickets – tickets that need review, but definitely
>>>>>> not 8099-level reviews:
>>>>>>
>>>>>> 1) There are a lot of low-hanging tickets that would benefit from outside
>>>>>> contributors as their first patch in Cassandra (like
>>>>>> https://issues.apache.org/jira/browse/CASSANDRA-12541 ,
>>>>>> https://issues.apache.org/jira/browse/CASSANDRA-12776 )
>>>>>> 2) Some of these patches already exist and just need to be reviewed and
>>>>>> eventually committed.
>>>>>>
>>>>>> Folks like Ed and Kurt have been really active in Jira lately, and there
>>>>>> aren’t a ton of people currently reviewing/committing – that’s part of OSS
>>>>>> life, but the less friction that exists getting those patches reviewed and
>>>>>> committed, the more people will be willing to contribute in the future, and
>>>>>> the better off the project will be.
>>>>>>
>>>>>>
>>>>>> On 10/19/16, 9:14 AM, "Jeremy Hanna" <jeremy.hanna1...@gmail.com> wrote:
>>>>>>
>>>>>>> And just to be clear, I think everyone would welcome more testing for
>>>>>>> both regressions and new code correctness.  I think everyone would
>>>>>>> appreciate the time savings around more automation.  That should give
>>>>>>> more time for a thoughtful review - which is likely what new contributors
>>>>>>> really need to get familiar with different parts of the codebase.  These
>>>>>>> LHF reviews won’t be like the code reviews of the vnode or the 8099
>>>>>>> ticket, obviously, so it won’t be a huge burden.  But it has some very
>>>>>>> tangible benefits imo, as has been said.
>>>>>>>
>>>>>>>> On Oct 19, 2016, at 11:08 AM, Jonathan Ellis <jbel...@gmail.com> wrote:
>>>>>>>>
>>>>>>>> I specifically used the phrase "problems that the test would not" to
>>>>>>>> show I am talking about more than mechanical correctness.  Even if the
>>>>>>>> tests are perfect (and as Jeremiah points out, how will you know that
>>>>>>>> without reading the code?), you can still pass tests with bad code.  And
>>>>>>>> is expecting perfect tests really realistic for multithreaded code?
>>>>>>>>
>>>>>>>> But besides correctness, reviews should deal with:
>>>>>>>>
>>>>>>>> 1. Efficiency.  Is there a quadratic loop that will blow up when the
>>>>>>>> number of nodes in a cluster gets large?  Is there a linear amount of
>>>>>>>> memory used that will cause problems when a partition gets large?  These
>>>>>>>> are not theoretical problems.
>>>>>>>>
>>>>>>>> 2. Maintainability.  Is this the simplest way to accomplish your goals?
>>>>>>>> Is there a method in SSTableReader that would make your life easier if
>>>>>>>> you refactored it a bit instead of reinventing it?  Does this reduce
>>>>>>>> technical debt or add to it?
>>>>>>>>
>>>>>>>> 3. "Bus factor."  If only the author understands the code and all anyone
>>>>>>>> else knows is that tests pass, the project will quickly be in bad shape.
>>>>>>>> Review should ensure that at least one other person understands the code,
>>>>>>>> what it does, and why, at a level that s/he could fix bugs in it later if
>>>>>>>> necessary.
>>>>>>>>
>>>>>>>> On Wed, Oct 19, 2016 at 10:52 AM, Jonathan Haddad <j...@jonhaddad.com> wrote:
>>>>>>>>
>>>>>>>>> Shouldn't the tests test the code for correctness?
>>>>>>>>>
>>>>>>>>> On Wed, Oct 19, 2016 at 8:34 AM Jonathan Ellis <jbel...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> On Wed, Oct 19, 2016 at 8:27 AM, Benjamin Lerer <benjamin.le...@datastax.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Having the tests pass does not mean that a patch is fine, which is
>>>>>>>>>>> why we have a review checklist.
>>>>>>>>>>> I never mark a patch as Patch Available without having the tests
>>>>>>>>>>> passing, but most of my patches never pass on the first try. We
>>>>>>>>>>> always make mistakes no matter how hard we try.
>>>>>>>>>>> The reviewer's job is to catch those mistakes by looking at the
>>>>>>>>>>> patch from another angle. Of course, sometimes both of them fail.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Agreed.  Review should not just be a "tests pass, +1" rubber stamp,
>>>>>>>>>> but actually checking the code for correctness.  The former is just
>>>>>>>>>> process; the latter actually catches problems that the tests would
>>>>>>>>>> not.  (And this is true even if the tests are much much better than
>>>>>>>>>> ours.)
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Jonathan Ellis
>>>>>>>>>> co-founder, http://www.datastax.com
>>>>>>>>>> @spyced
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Jonathan Ellis
>>>>>>>> co-founder, http://www.datastax.com
>>>>>>>> @spyced
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Sorry this was sent from mobile. Will do less grammar and spell check
>>>>> than usual.
>>>>>
>>>>
>>>>
>>>> --
>>>> Sorry this was sent from mobile. Will do less grammar and spell check
>>>> than usual.
>>>>
>>>
>
