Alan Gauld wrote:

The basic idea in testing is to try to break your code. Try to think
of every kind of evil input that could possibly come along and see
if your code survives. In amongst all of that you should have
some valid values too, and know what to expect as output.

Testing is more than just trying to break code of course. I'm reminded of a quote:

"I find it amusing when novice programmers believe their main job is
preventing programs from crashing. ... More experienced programmers
realize that correct code is great, code that crashes could use
improvement, but incorrect code that doesn't crash is a horrible
nightmare." -- Chris Smith

Testing is an attempt to ensure that code is *correct* -- that it does what it is supposed to do.


There are a number of different types of tests, with different purposes.

Unit tests are for testing that code behaves as expected. If you give the function this input, this will be its result. (The result might be to return a value, or it might be to raise an exception.) If you have a function that's supposed to add two numbers, it's important to be sure that it actually, ya know, *adds two numbers*, and not something else. ("It *almost* adds them, it's just that sometimes the answer is off by one or two...")
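For instance, a unit test for that hypothetical add-two-numbers function might look like this (the function and its expected values here are made up for illustration):

```python
import unittest

def add(a, b):
    """Hypothetical function under test: it should add two numbers."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_normal_values(self):
        # Ordinary, valid inputs with known expected results.
        self.assertEqual(add(2, 3), 5)
        self.assertEqual(add(-1, 1), 0)

    def test_edge_cases(self):
        # The sort of places an off-by-one-or-two bug likes to hide.
        self.assertEqual(add(0, 0), 0)
        self.assertEqual(add(10**9, 1), 10**9 + 1)

# Run the tests programmatically, without exiting the interpreter.
suite = unittest.TestLoader().loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

If add() "almost" added and was sometimes off by one, the assertions above would fail loudly instead of letting the wrong answer slip through.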

Doctests are examples that you put into the documentation (usually into docstrings of functions and methods). Their primary purpose is to be examples for the reader to read, but the secondary aspect is that you can run the examples and be sure that they actually work as they are supposed to work.
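A sketch of what that looks like in practice (the mean() function here is just an example I've made up):

```python
import doctest

def mean(values):
    """Return the arithmetic mean of a non-empty sequence of numbers.

    The examples below document the function for the reader, and
    doctest can also execute them to confirm they really work:

    >>> mean([1, 2, 3, 4])
    2.5
    >>> mean([])
    Traceback (most recent call last):
        ...
    ValueError: mean of empty sequence
    """
    if not values:
        raise ValueError("mean of empty sequence")
    return sum(values) / len(values)

# Run the examples found in mean's docstring.
runner = doctest.DocTestRunner(verbose=False)
for test in doctest.DocTestFinder().find(mean, "mean"):
    runner.run(test)
```

If the docstring's examples drift out of sync with the code, running the doctests catches it.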

Regression tests are to prevent bugs from recurring. For example, suppose you have a function spam() and you discover it fails for one particular input, "aardvark". The first thing you should do, before even fixing the bug, is write a test:

assert spam("aardvark") == "expected result"

This test will fail, because there's a bug in spam(). Now go ahead and fix the bug, and the test will then pass. If your code ever has a *regression* that reintroduces the bug in spam(), the test will fail again and you will immediately notice. Regression tests are to prevent fixed bugs from returning.
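Putting that together as a minimal sketch (spam() and its expected result are placeholders standing in for your real function and the bug you fixed):

```python
def spam(text):
    # Hypothetical *fixed* implementation. Imagine the original
    # version crashed or gave a wrong answer for "aardvark".
    return text.upper()

def test_spam_aardvark_regression():
    # Pin down the once-broken input so the bug cannot silently return.
    assert spam("aardvark") == "AARDVARK"

test_spam_aardvark_regression()
```

Keeping that one-line test in your suite forever is cheap insurance against the same bug coming back.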

User Acceptance Tests are mostly relevant when you're building software for a client. Both parties need a way to know when the software is "done". Of course software is never done, but if you're charging $10000 for a software project, you need to have a way of saying "This is where we stop; if you want us to continue, it will cost you more". UATs give both parties an objective way of telling that the promised functionality is there (e.g. "if the user clicks the Stop button, processing must stop within one second") and of identifying bugs and/or areas that weren't specified in enough detail.

Of course, all of these are fuzzy categories -- there's no hard line between them.

Test Driven Development tends to focus on the more normal inputs
in my experience, but done properly your test code will usually
be bigger than your production code. In a recent project that we
completed we had 600k lines of production code and over a
million lines of test code.

I can well believe that. I've had a look at a couple of my projects (*much* smaller than 600 KLOC!) and I'm averaging about 800 lines in the test suite per 1000 lines in the project. That's an over-estimate of the production side: the 1000 lines include doctests, which should really be counted towards the tests. Taking that into account, I get (very roughly) a 1:1 ratio of test code to production code.


And we still wound up with over 50 reported bugs during Beta test...
But that was much better than the 2000 bugs on an earlier project :-)
But testing is hard.

Maybe so, but nothing beats running your test suite and seeing everything pass!



--
Steven

_______________________________________________
Tutor maillist  -  Tutor@python.org
To unsubscribe or change subscription options:
http://mail.python.org/mailman/listinfo/tutor