On 02.05.2007, at 02:32, Marius Mauch wrote:
> a) cost (in terms of runtime, resource usage, additional deps)
Tools for this could be implemented in the package manager. The
package has to be installed and tested by the developer anyway, so if
portage showed the times for each stage, or the test time compared to
the rest, the developer could get an idea: if the test time is smaller
than the build time (i.e. less than half of the total time), the cost
is low. If the test time is less than 1 hour (or whatever threshold),
the cost is low as well. In any other case, the cost is high.
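
Just to illustrate that heuristic, a rough sketch in Python (not
actual portage code; the function and its inputs are made up, and the
per-stage timings would have to come from portage itself):

    # Sketch of the cost heuristic above; build_seconds/test_seconds
    # would be the stage timings recorded by the package manager.
    def test_cost_is_low(build_seconds, test_seconds):
        if test_seconds < build_seconds:  # i.e. less than half of total
            return True
        if test_seconds < 3600:           # less than 1 hour
            return True
        return False

    # e.g. a 40 min build with a 2 h test suite counts as costly:
    # test_cost_is_low(40 * 60, 2 * 3600) -> False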
> b) effectiveness (does a failing/working test mean the package is
> broken/working?)
Figuring this out before releasing a package to the tree might be a
lot of work, so it could be figured out later: if there are bugs about
tests failing, try to reproduce them, or ask the reporter to check
whether everything is actually working as expected.
> c) importance (is there a realistic chance for the test to be useful?)
This can be decided fairly easily, as mentioned in other posts
(scientific packages, core packages, cryptographic packages, ...).
> d) correctness (does the test match the implementation? overlaps a bit
> with effectiveness)
This might be a lot of work, and I don't think it can be checked in a
sane way for every package. So it's probably up to the maintainer/herd
or upstream to decide whether they should take care of this.
Philipp

