http://src.chromium.org/viewvc/chrome/trunk/tools/buildbot/scripts/master/log_parser/gtest_command.py?revision=28463&view=markup
has
some logic for that but doesn't seem to work anymore. Nicolas is on
vacation, I'll take a look soon: http://crbug.com/30599

M-A
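
The gtest_command.py logic referenced above presumably scans the step's stdout for the runner's failure summary. A minimal sketch of that kind of parsing, using the log excerpt quoted below in this thread (the function name and regex are my own, not the actual buildbot code), might look like:

```python
import re

def parse_failing_tests(log_text):
    """Collect test names listed after a 'Failing tests:' line in a
    gtest runner log. Simplified sketch; the real parsing lives in
    gtest_command.py on the buildbot master."""
    failing = []
    in_failures = False
    for line in log_text.splitlines():
        line = line.strip()
        if line == "Failing tests:":
            in_failures = True
            continue
        if in_failures:
            # A gtest name looks like TestCase.TestName; stop at the
            # first line that doesn't match (e.g. "program finished...").
            if re.match(r"^\w+\.\w+$", line):
                failing.append(line)
            else:
                break
    return failing
```

Running this over the stdio log quoted below would yield ["ExtensionBrowserTest.FLAKY_AutoUpdate"], which the master could then check for the FLAKY_ prefix.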

On Fri, Dec 18, 2009 at 1:47 AM, Paweł Hajdan, Jr.
<[email protected]>wrote:

> Looks like the browser_tests launcher needs to be updated, because its
> exit code was 1 (that's why the bot went red).
>
> On Fri, Dec 18, 2009 at 04:33, Lei Zhang <[email protected]> wrote:
> > Does FLAKY_ work on all tests? The test run [1] for my most recent
> > check-in turned a bot red, even though the only test that failed is
> > one marked FLAKY_.
> >
> > [red] browser_tests [browser_tests 896 flaky did not complete crashed
> > or hung] [197 seconds] [/red]
> >
> > and in the log: [2]
> >
> > ...
> > Note: Google Test filter = AutocompleteBrowserTest.YOU HAVE 8 FLAKY TESTS
> > [==========] Running 0 tests from 0 test cases.
> > [==========] 0 tests from 0 test cases ran. (0 ms total)
> > [  PASSED  ] 0 tests.
> >  YOU HAVE 8 FLAKY TESTS
> >
> > 113 tests run
> > 1 test failed
> > Failing tests:
> > ExtensionBrowserTest.FLAKY_AutoUpdate
> > program finished with exit code 1
> > elapsedTime=197.683425
> >
> >
> > [1]
> http://build.chromium.org/buildbot/waterfall/builders/Linux%20Builder%20(Views%20dbg)/builds/5784
> > [2]
> http://build.chromium.org/buildbot/waterfall/builders/Linux%20Builder%20(Views%20dbg)/builds/5784/steps/browser_tests/logs/stdio
> >
> > On Tue, Oct 6, 2009 at 4:02 PM, Nicolas Sylvain <[email protected]> wrote:
> >> Hello,
> >> We currently have more than 50 unit tests that are disabled, most of
> >> them because they were flaky.
> >> Disabling tests is bad because we lose coverage on them entirely, so I
> >> implemented a way to mark tests as "flaky".
> >> The same way you disable a test with DISABLED_ at the beginning of its
> >> name, you can now mark it as flaky with FLAKY_. The behavior is exactly
> >> the same as for any other running test: you will still be able to see
> >> when it fails (and why). The only difference is that if only FLAKY_
> >> tests failed, the buildbot/trybots won't consider it a failure. On the
> >> waterfall, it will show the box as orange with the list of all flaky
> >> tests that failed (pending one more buildbot restart). On the console
> >> view it will stay green.
> >> But this is not a toy. Flaky tests are bad. We should mark tests flaky
> >> only if we really have to, and if you do, please make sure to file a
> >> P1 bug. Set the owner of the bug to whoever regressed the test. If you
> >> can't find who regressed the test, assign it to the person who
> >> originally wrote the test.
> >> Once we start tagging the flaky tests, we will monitor the flakiness
> >> dashboard and make sure that a test that is no longer flaky has its
> >> FLAKY_ tag removed.
> >> Let me know if you have questions.
> >> Thanks
> >> Nicolas
> >>
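
The policy Nicolas describes above (red if any ordinary test failed, orange warning if only FLAKY_-prefixed tests failed) can be sketched as a small classifier; the helper name below is hypothetical, not Chromium's actual master code:

```python
def classify_step(failing_tests):
    """Decide a buildbot step's status from its failing test names.
    Sketch of the policy described in this thread: red if any ordinary
    test failed, orange ("warning") if only FLAKY_ tests failed, green
    if nothing failed."""
    if not failing_tests:
        return "green"
    # The FLAKY_ prefix sits on the test name after the TestCase dot,
    # e.g. ExtensionBrowserTest.FLAKY_AutoUpdate.
    if all(name.split(".")[-1].startswith("FLAKY_") for name in failing_tests):
        return "orange"
    return "red"
```

Under this sketch, the build quoted earlier (only ExtensionBrowserTest.FLAKY_AutoUpdate failing) should have come out orange, not red, which is consistent with Paweł's point that the browser_tests launcher's exit code of 1 is what actually turned the bot red.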

-- 
Chromium Developers mailing list: [email protected] 
View archives, change email options, or unsubscribe: 
    http://groups.google.com/group/chromium-dev
