On Fri, Nov 26, 2010 at 10:38:44AM -0500, Zach Mullen wrote: > I just realized why this isn't working -- it's actually not a regression.
Maybe we have different definitions of "regression". I see a feature that used to do one thing but which now does something else. Here is what the docs say about the COST property: # COST: Set this to a floating point value. Tests in a test set will be # run in descending order of cost. This property describes the cost of a test. You can explicitly set this value; tests with higher COST values will run first. I don't see anything there about parallel or non-parallel runs. It seems to me that if I set the COST property, I should be able to control the order in which tests run, period. So at the very least, the docs should be updated if you intend to change the behavior. > In this release we decided that the costs should only be taken into account > in a parallel case (ctest -j N). Many users have implicit dependencies > based on the order of their add_test calls, so we didn't want to break > backward compatibility for those not using parallel ctest. It looks like ctest -j2 is respecting COST. Currently I have several tests that cannot run at the same time as others (they touch the same resources and/or running two of them at once would crush the machine). If I could get the old COST behavior by running ctest -j1, that might be an acceptable workaround, but it does not appear to work today. > The non-parallel way to specify a test to run last is simply to make it the > last add_test call. My CMake projects are modular (I imagine this is true for many CMake users). Each module is responsible for adding its own unit tests and code quality checks. As I said in my initial email, the code quality checks must run after the unit tests so that accurate code coverage values can be calculated. I can try to insure that my add_unittest() functions all run before my add_code_quality() functions, but that seems brittle and error-prone. It was much nicer when I could just tell add_code_quality() to add all its tests with COST -1000 to guarantee they run after everything else. I can imagine ways to work around this problem, but they all seem rather clunky, especially when COST used to solve the problem so simply and elegantly. I hope we can reach a useful middle ground about the future of the COST property. In its current state, it is of no use to me. Thanks, tyler > On Fri, Nov 26, 2010 at 10:20 AM, Zach Mullen <zach.mul...@kitware.com>wrote: > > On Tue, Nov 23, 2010 at 6:02 PM, David Cole <david.c...@kitware.com>wrote: > > > >> It might be due to this commit: > >> > >> http://cmake.org/gitweb?p=cmake.git;a=commitdiff;h=142edf8ad4baccd991a6a8a3e5283d0b575acca2 > >> (first released in 2.8.3) > >> > >> Or this one: > >> > >> http://cmake.org/gitweb?p=cmake.git;a=commitdiff;h=b4d27dc041c9164d6f3ad39e192f4b7d116ca3b3 > >> (first released in 2.8.2) > >> > >> Either way, seems like a bug to me. If you explicitly specify a COST > >> property value, especially a negative one to induce "last run" status, > >> then it should be honored over either historical average measurement > >> or "failed last time, so run it first this time" behavior. _______________________________________________ Powered by www.kitware.com Visit other Kitware open-source projects at http://www.kitware.com/opensource/opensource.html Please keep messages on-topic and check the CMake FAQ at: http://www.cmake.org/Wiki/CMake_FAQ Follow this link to subscribe/unsubscribe: http://www.cmake.org/mailman/listinfo/cmake