In general I agree.

I’m an author of the PR that triggered this thread. That PR specifically prints 
new information to help diagnose an intermittent CI failure in a 
DistributedTest test, which we have been unable to reproduce outside of CI. Our 
working hypothesis is that an action initiated at the end of a test sometimes 
continues beyond the end of the test, interfering with the next test. The 
information we’re printing is designed to help test our hypothesis.

In another case, I added printouts to report progress through a multi-threaded 
“unit” test, where the test establishes a specific interleaving between 
threads. That test also was difficult to reproduce outside of CI. We’re 
confident that we’ve fixed the original problem in the test, but if we’re 
wrong, the printouts give essential information to whoever has to diagnose the 
next failure in CI.

In each of these cases, we originally used logging to write the new 
information. In response to Galen’s comments and other advice in this thread, 
we’ve switched to printouts, to reduce the tests’ unnecessary dependence on 
product code that isn’t relevant to the test.
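One reason we felt safe making that switch: the thread id and timestamps that make the logging framework attractive for multi-threaded tests (as Darrel notes below) can be carried by a plain printout too. A minimal sketch of what I mean; the helper class and method names here are hypothetical, not from the actual PR:

```java
import java.time.LocalTime;

// Hypothetical helper (not from the PR): a plain System.out printout can
// still include the timestamp and thread name that the logging framework
// provides, without coupling the test to product logging code.
public class TestPrint {

  // Build the line separately so the format is easy to verify.
  static String format(String message) {
    return LocalTime.now() + " [" + Thread.currentThread().getName() + "] " + message;
  }

  static void debug(String message) {
    System.out.println(format(message));
  }

  public static void main(String[] args) {
    debug("before interleaving step");
    debug("after interleaving step");
  }
}
```

Whether that's worth even a tiny helper, versus bare println calls, is a judgment call per test.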

All of that said, I also strongly prefer tests that are small enough, focused 
enough, and clear enough that logs/printouts are unnecessary.

—
Dale Emery
dem...@pivotal.io

> On Sep 21, 2018, at 10:34 AM, Kirk Lund <kl...@apache.org> wrote:
> 
> Most of these logWriter or logger usages are in larger end-to-end tests
> that were written before we could use IDE debuggers on our tests. With a
> debugger, I don't want to see more output from the test, so I tend to delete
> all such System.out.printlns and LogWriter/Logger usage.
> 
> My recommendation is to delete all extra System.out.printlns AND
> LogWriter/Logger usages unless they're part of the test itself. If you think
> the line is useful, would it be sufficient as a comment? If not, then
> maybe the test is too long and doing too much, and should be exploded into
> several easier-to-debug test methods with a narrower focus.
> 
> Remember, a single test that does a lot and has tons of assertions is an
> anti-pattern. It's better to split that test into many tests, such that
> on seeing which test failed, you either know exactly what broke without
> digging too much or at least have a pretty good idea where to dig. Sure,
> there's a balance to strike, especially in end-to-end tests, but if the test
> method scrolls on for many screens, then most likely it's doing too much
> and the author(s) added logging to compensate for this.
> 
> On Thu, Sep 20, 2018 at 2:10 PM, Darrel Schneider <dschnei...@pivotal.io>
> wrote:
> 
>> For simple single threaded tests System.out would do the job.
>> For a multi-threaded test I have found the logging framework to be helpful
>> because of the thread id and the timestamps.
>> 
>> 
>> On Thu, Sep 20, 2018 at 1:50 PM Dale Emery <dem...@pivotal.io> wrote:
>> 
>>> As long as the stdout is available in the test results, I’m more than
>>> happy to avoid coupling the tests to the product logging code.
>>> 
>>>> On Sep 20, 2018, at 1:39 PM, Galen O'Sullivan <gosulli...@pivotal.io>
>>> wrote:
>>>> 
>>>> I was reviewing a PR recently and noticed that we have some test code
>>> that
>>>> uses Logger (or LogWriter). If I understand correctly, anything logged
>> to
>>>> stdout will be included in the test output, and anything logged in a
>>> DUnit
>>>> VM will be logged with the appropriate VM number prepended in the
>> output.
>>>> Because logging is coupled to product code, I propose we log all test
>>>> output to standard out (using System.out.println or such). I also
>> propose
>>>> we add this to the Geode style guide.
>>>> 
>>>> Thoughts/questions/objections?
>>>> 
>>>> Thanks,
>>>> Galen
>>> 
>>> 
>>> —
>>> Dale Emery
>>> dem...@pivotal.io
>>> 
>>> 
>> 
