[SciPy-user] Article(s) on Test Driven Development for Science
josef.pktd@gmai...
Thu Mar 12 12:02:16 CDT 2009
On Thu, Mar 12, 2009 at 9:13 AM, Timmie <timmichelsen@gmx-topmail.de> wrote:
> Hello,
> In many blogs, in the book "Expert Python Programming", and in code I read
> about test driven development.
> It's said to help prevent future breakage or failures of the code.
>
> I see one major difficulty when adopting this for science scripts:
>
> In science we not only have to control the program flow but also to
> validate the output.
> I think such a validation needs to be included in tests.
>
> I may change something in my code and still pass the tests on the software
> side, but the result data may be totally wrong.
>
> Are there already concepts for such testing?
>
> The tests I have seen so far play mostly with generated random data.
>
> Are there presentations or articles about this?
>
> Regards,
> Timmie
>
I don't think validating the results needs much special discussion
beyond the regular testing tools, and it would be very field specific.
For example in stats, and similarly in my other work, I use four types
of (unit) tests:
* validating special cases, where I know what the right results are
supposed to be. This is usually my first step to get the basic
mistakes fixed.
* comparison with other implementations: often I have several
implementations available to calculate the same result, e.g. in
stats.distributions, numerical integration versus an explicit formula,
or an unoptimized version of a function with loops and a simple
structure against a second version with optimized matrix algebra.
* comparison with validated numbers: e.g. comparing with results from
R, from publications, or with certified examples such as the ones from
NIST.
* using theoretical properties: the random tests in stats are based on
the statistical properties of the statistic, distribution, or
estimator, either from the definition or, for example, from the law of
large numbers. If I can simulate a large enough sample or run a Monte
Carlo with enough replications, I can test that the computed results
correspond to the theoretical results.
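As a rough illustration of these four styles, here is a sketch using plain
numpy and numpy.testing. The function under test (sample_var, an unbiased
sample variance), its loop-based twin, and all tolerances are my own
illustrative choices, not code from scipy:

```python
import numpy as np
from numpy.testing import assert_allclose

def sample_var(x):
    """Unbiased sample variance, vectorized (illustrative example)."""
    x = np.asarray(x, dtype=float)
    return ((x - x.mean()) ** 2).sum() / (x.size - 1)

def sample_var_loop(x):
    """Same quantity with an explicit loop: a second, simple implementation."""
    m = sum(x) / len(x)
    return sum((xi - m) ** 2 for xi in x) / (len(x) - 1)

# 1) special case with a known answer: constant data has zero variance
assert_allclose(sample_var([5.0, 5.0, 5.0]), 0.0, atol=1e-12)

# 2) comparison of two implementations on the same data
x = [1.0, 2.0, 4.0, 8.0]
assert_allclose(sample_var(x), sample_var_loop(x), rtol=1e-12)

# 3) comparison with an externally validated number
#    (var(c(1, 2, 4, 8)) in R gives 9.583333...)
assert_allclose(sample_var(x), 9.583333333333333, rtol=1e-10)

# 4) theoretical property via simulation: Uniform(0, 1) has variance 1/12;
#    by the law of large numbers a big sample should come close, so the
#    random test uses a deliberately loose tolerance and a fixed seed
rng = np.random.RandomState(1234)
sample = rng.uniform(size=100000)
assert_allclose(sample_var(sample), 1.0 / 12.0, rtol=0.05)
```

Note the difference in tolerances: the deterministic comparisons can be tight
(rtol around 1e-10), while the simulation-based check only verifies that the
estimate is in the right neighborhood.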
To simplify the actual tests, I also use regression tests after
verifying the results, but regression tests don't validate the results
if they were wrong in the first place.
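A minimal regression test in this sense freezes a previously computed and
manually verified number into the test, so it catches accidental changes but
not an error that was there from the start. The function and the stored value
below are made up for illustration:

```python
import numpy as np
from numpy.testing import assert_allclose

def trimmed_mean(x, frac=0.1):
    """Mean after dropping frac of the data from each tail (illustrative)."""
    x = np.sort(np.asarray(x, dtype=float))
    k = int(frac * x.size)
    return x[k:x.size - k].mean()

def test_trimmed_mean_regression():
    x = np.arange(20.0)
    x[-1] = 1000.0  # one outlier, removed by the trimming
    # 9.5 was computed once with the current implementation and
    # checked by hand; the test only guards against future changes
    assert_allclose(trimmed_mean(x), 9.5, rtol=1e-12)

test_trimmed_mean_regression()
```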
For big models, I have often found nothing better than visual
inspection and relying on intuition about whether the output looks
correct. I try to verify the individual pieces with unit tests, but
whether everything works correctly together I have not formally tested.
So my impression is that validating the results in tests should just
be part of the regular testing strategy, which should not be
restricted to tests that verify whether the function runs and the
result has the correct shape and type.
Josef