On Jun 10, 2012, at 9:26 AM, Ojan Vafai <[email protected]> wrote:

> On Sun, Jun 10, 2012 at 4:54 AM, Balazs Kelemen <[email protected]> wrote:
>> So the unit tests are superfluous.  In particular, if I had to pick between 
>> only having unit tests or only having regression tests, I might pick unit 
>> tests.  But if I already have regression tests then I'm unlikely to want to 
>> incur technical debt to build unit tests, particularly since unit tests 
>> require changing the infrastructure to make the code more testable, which 
>> then leads to the problems listed above.
>> 
>> There are many code paths that are used rarely. In practice, we were having 
>> regressions frequently when people modified the code. Since the codebase has 
>> been unit-tested, the rate of regressions has gone down considerably. The 
>> time you spend dealing with tests is considerably less than the time you 
>> spend rolling patches in and out as you encounter different edge cases that 
>> different configurations/flags hit.
>>  
> 
> A quick note on unit tests. I think it's easy to define a hard limit for 
> unit tests, which is this: if I want to add a feature, or some customization 
> option for a particular port, it should be less effort to write the unit test 
> than to write the actual code. I have heard from my colleagues a few times 
> that this is not always the case with NRWT. I can imagine that it's not 
> trivial to set up the unit test system for a module that has not been 
> unit-tested so far, but I think that should be the job of those who are 
> actively working on the test harness, not of those who just need some work 
> done for their port.
> 
> While this is a nice ideal to strive for, I don't think it ever plays out 
> this way for testing on any project; for example, it is very frequently 
> harder to write tests for my WebCore changes than to make the change itself. 
> Certainly anything we can do to make testing easier is better, but I don't 
> see NRWT as more difficult to test than any other code in the WebKit project.
> 
> WebKit has a policy that every change requires tests. I don't see why tooling 
> should be any different. It's unfortunate that NRWT started with zero tests, 
> so there are still (very few now!) parts that aren't tested. It's hard to 
> test those parts if that's what you're modifying. However, it's *especially* 
> the port-specific code that needs testing. Those are exactly the codepaths 
> that break from lack of testing.
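As an aside, the kind of port-specific branch being debated here can be pinned down by a very small unit test. The function and flags below are purely hypothetical stand-ins (not actual webkitpy API), sketched in the style of Python's standard unittest module:

```python
import unittest

# Hypothetical stand-in for a port-specific helper of the kind discussed
# above: given a port name, pick the extra flags that port's runner uses.
def flags_for_port(port_name):
    """Return extra runner flags for a given port (illustrative only)."""
    port_flags = {
        'chromium': ['--enable-hardware-gpu'],
        'qt': ['--qt-raster'],
    }
    # Rarely-exercised fallback path -- exactly the kind of branch that
    # regresses silently when no unit test covers it.
    return port_flags.get(port_name, [])

class FlagsForPortTest(unittest.TestCase):
    def test_known_port(self):
        self.assertEqual(flags_for_port('qt'), ['--qt-raster'])

    def test_unknown_port_falls_back_to_no_flags(self):
        # Covers the default branch that only rare configurations hit.
        self.assertEqual(flags_for_port('gtk'), [])
```

A test like this runs in milliseconds under `python -m unittest` (in the WebKit tree, webkitpy tests are typically run via the test-webkitpy script), which is what makes covering rarely-hit branches cheap.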

Do we have some data that shows NRWT suffering fewer regressions (per unit time 
or per N changes) than ORWT?

I am strongly in favor of automated tests in general, but I'm skeptical of 
their value here, for two reasons:

1) I have found the hackability of anything involving webkitpy and its unit 
tests to be poor. It takes a long time to make a simple change, and the need to 
add tests or modify tests is certainly part of it.

2) For code that ships to end-users or third parties, I am a strong advocate of 
comprehensive testing. I think testing is worthwhile even if, hypothetically, 
faith-based programming were less total work. That is because we are trading 
off the time of a couple of hundred WebKit engineers against the quality of 
software experienced by hundreds of millions of users. So it's worth it to 
incur significant test infrastructure costs to benefit a much greater number 
of users.

But for the case of internal tools, I think the tradeoff is fundamentally 
different. The costs of maintaining test infrastructure and the costs of 
dealing with regressions are borne by more or less the same set of people. So 
if the work to maintain unit tests is greater than the cost of just dealing 
with whatever regressions slip through, then it's probably not worth it.

My own gut feeling is that ORWT never experienced enough regressions to justify 
the cost of a unit testing system.

Regards,
Maciej



_______________________________________________
webkit-dev mailing list
[email protected]
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev