[Numpy-discussion] Branching 1.1.x and starting 1.2.x development
Stéfan van der Walt
Tue May 20 06:26:34 CDT 2008
2008/5/20 Pearu Peterson <firstname.lastname@example.org>:
>> 7. If you fix a bug, it must have tests.
> Since you asked, I have a problem with the rule 7 when applying
> it to packages like numpy.distutils and numpy.f2py, for instance.
> Do you realize that there exist bugs/features for which unittests cannot
> be written in principle? An example: say, a compiler vendor changes
> a flag of the new version of the compiler so that numpy.distutils
> is not able to detect the compiler or it uses wrong flags for the
> new compiler when compiling sources. Often, the required fix
> is trivial to find and apply, also just reading the code one can
> easily verify that the patch does not break anything. However, to
> write a unittest covering such a change would mean that one needs
> to ship also the corresponding compiler to the unittest directory.
> This is nonsense, of course. I can find other similar examples
> that have needed attention and changes to numpy.distutils and
> numpy.f2py in the past, and I know that a few more are coming up.
My earlier message, regarding your patch to trunk, was maybe not
clearly phrased. I didn't want to start a unit-testing discussion;
I was simply trying to say "if we apply patches now, we should make
sure they work". You did that, so I was happy. I've been the main
transgressor in applying new changes to trunk; sorry about that.
As for your comment above: yes, writing unit tests is hard. As you
mentioned, some changes are trivial, difficult to test, and plainly
correct on inspection. If the code were already exercised in the test
suite, I would be less worried about such trivial changes. Then, at
least, we know that the code is executed every time the test suite
runs, so if a person forgot to close a parenthesis or a string quote,
it would be picked up.
The case I describe above is exceptional (but it does occur much more
frequently in f2py and distutils). Still, I would not say that those
tests are impossible to write, just that they require thinking outside
the box. For example, it would be quite feasible to have a set of
small Python scripts that pretend to be compilers, to assist in
asserting that the flags we think we pass are the ones that reach the
compiler. Similarly, a fake directory tree can be created to help
verify that `site.cfg` is correctly parsed and applied. You are
right: some factors are out of our control, but we need to test for
every piece of functionality that isn't.
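To make the "fake compiler" idea concrete, here is a minimal sketch of how such a test could work: a tiny Python script stands in for the compiler, records the flags it is invoked with, and a test then asserts that the flags we think we pass are the ones that reached it. All names here (`fakecc.py`, `run_fake_compiler`) are hypothetical illustrations, not part of numpy.distutils.

```python
import os
import subprocess
import sys
import tempfile

# A stand-in "compiler": it logs its arguments to a file and exits
# successfully, so the build machinery believes compilation worked.
FAKE_COMPILER = """\
import sys
# First argument is the log path; the rest are the "compiler flags".
with open(sys.argv[1], "w") as log:
    log.write(" ".join(sys.argv[2:]))
sys.exit(0)
"""

def run_fake_compiler(flags):
    """Invoke the fake compiler with *flags*; return the flags it saw."""
    with tempfile.TemporaryDirectory() as tmp:
        cc = os.path.join(tmp, "fakecc.py")
        log = os.path.join(tmp, "flags.log")
        with open(cc, "w") as f:
            f.write(FAKE_COMPILER)
        # In a real test, distutils would be pointed at this script as
        # the compiler executable; here we invoke it directly.
        subprocess.run([sys.executable, cc, log] + flags, check=True)
        with open(log) as f:
            return f.read().split()

if __name__ == "__main__":
    seen = run_fake_compiler(["-O2", "-fPIC", "-c", "spam.c"])
    assert seen == ["-O2", "-fPIC", "-c", "spam.c"]
    print("fake compiler saw:", seen)
```

The same pattern extends to the `site.cfg` case: build a temporary directory tree with known library and include paths, point the configuration parser at it, and assert on the resolved paths, with no real compiler or libraries required.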