On 6/24/2013 8:02 PM, Justin Lebar wrote:

    Under what circumstances would you expect the code coverage build
    to break but all our other builds to remain green?

Sorry, I should have been clearer. For builds, I think it would be pretty unusual for them to break on code coverage and yet remain green on non-coverage builds. But I've seen tests do wild things with code coverage enabled, because timing changes so much. The most worrisome issues are when tests cause crashes on coverage-enabled builds, and that's where I'm looking for help in tracking them down and fixing them. Oranges on coverage-enabled builds I can live with (they don't change the coverage numbers in a large way, and they can even point us at timing-dependent tests, which could be a good thing in the long run), but crashes effectively prevent us from measuring coverage for that test suite/chunk. Test crashes were one of the big issues with the old system -- we could never get eyes on the crashes to debug what had happened and get them fixed.

Thanks,
Clint
On Jun 24, 2013 6:51 PM, "Clint Talbert" <ctalb...@mozilla.com> wrote:

    Decoder and Jcranmer got code coverage working on Try [1]. They'd
    like to expand this into something that runs automatically,
    generating results over time so that we can actually know what our
    code coverage status is with our major run-on-checkin test
    harnesses.  While both Joduinn and I are happy to turn this on, we
    have been down this road before.  We got code coverage stood up in
    2008 and ran it for a while, but when it became unusable and fell
    apart, we were left with no option but to turn it off.

    Now, Jcranmer and Decoder's work is of far higher quality than
    that old run, but before we invest the work in automating it, I
    want to know whether this is going to be useful and whether I
    can depend on the community of platform developers to address the
    inevitable issues where some checkin somewhere breaks the code
    coverage build.  Do we have your support?  Will you find the
    generated data useful?  I know I certainly would, but I need more
    buy-in than that (I can just use Try if I'm the only one concerned
    about it).  Let me know your thoughts on measuring code coverage
    and on owning breakages of the code coverage builds.

    Also, what do people think about standing up JSLint as well (as a
    separate automation job)?  We should treat these as two entirely
    separate things, but if it would be useful, we can look into
    it as well.  We can configure the rules around JSLint to be
    amenable to our practices and simply enforce against specific
    errors we don't want in our JS code.  If the JS style flamewars
    start up, I'll split this question into its own thread, because
    they are irrelevant to my objective here.  I want to know whether
    it would be useful to have something like this for JS.  If we do
    decide to use something like JSLint, then I will be happy to
    facilitate the JS style flamewars, because they will then be
    relevant to defining what we want the lint to do, but until that
    decision is made, let's hold them in check.
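
    For concreteness, here is a rough sketch of the kind of per-file
    directives JSLint already supports; the option names, the
    Components/Services globals, and the little isEnabled helper are
    just illustrative assumptions on my part, not a proposed policy:

        /*jslint browser: true, devel: true, maxerr: 50 */
        /*global Components, Services */

        function isEnabled(pref) {
            // JSLint would normally flag "==" here and ask for "===" --
            // exactly the kind of specific error we could choose to
            // enforce (or tolerate via an option).
            return Services.prefs.getBoolPref(pref) == true;
        }

    The point is only that the rule set is configurable per file and
    per project, so we could start from whatever subset we actually
    agree on.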

    So, the key things I want to know:
    * Will you support code coverage? Would it be useful to your work
    to have a regularly scheduled code coverage build & test run?
    * Would you additionally want to consider using something like
    JSLint for our codebase?

    Let me know,

    Clint

    [1]
    https://developer.mozilla.org/en-US/docs/Measuring_Code_Coverage_on_Firefox


_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
