[SciPy-User] peer review of scientific software
Thu Jun 6 11:06:38 CDT 2013
> On the other hand that neatly points out the problem that the user
> would be unlikely to guess that sparse would only work correctly for
Sorry, I guess the arguments got a bit twisted.
I'm not arguing against unit tests. The point Matt and I were making is
that functional testing and a large user base are important, and that as
users we can rely on packages that have both (for typical usage).
When I use Stata or numpy, I don't check whether the mean of 100
variables is calculated correctly (unless my numbers range from 1e-30 to
1e+30). When I use pandas, I quickly notice that the standard deviation
uses ddof=1, so I cannot use it as a drop-in replacement for numpy.
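The ddof discrepancy mentioned above can be shown in a few lines; this is just an illustrative sketch of the default behavior of the two libraries:

```python
import numpy as np
import pandas as pd

data = [1.0, 2.0, 3.0, 4.0]

# numpy defaults to the population standard deviation (ddof=0)
np_std = np.std(data)

# pandas defaults to the sample standard deviation (ddof=1)
pd_std = pd.Series(data).std()

# the two agree only when ddof is matched explicitly
assert np.isclose(pd_std, np.std(data, ddof=1))
assert not np.isclose(np_std, pd_std)
```

So code written against one default silently gives different numbers under the other, which is exactly why it is not a plug-in replacement.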
scipy is a library that needs unit tests:
When I started with scipy 5 years ago, there were huge gaps in test
coverage in many sub-packages, especially in the less popular areas.
There was not even minimal test coverage for some modules or
functions, and I wouldn't trust any of those results.
Bugs were mainly fixed in response to bug reports, so the popular
functions were pretty safe and bug-free. Everything else was a gamble.
I think all the major gaps in test coverage have been closed by now.
For example, linalg and fftpack were always pretty good (they build on
well-tested underlying libraries and get heavy usage).
special, signal, and stats got lots of attention (I'm not sure how far
along signal is; stats still has some problems).
ndimage got a partial makeover, and might still have rough edges
sparse got partial improvement and is on the schedule for this year
optimize got a refactoring, but there are still problems in some
algorithms that are mainly found by functional testing.
interpolate is a mixed bag, and splines are a bit messy.
integrate: I don't remember any problems there
maxentropy: removed because of lack of users and maintainers
I don't know much about the other ones.
Now, very few bugs show up in scipy.stats that feel urgent enough for
me to prepare a pull request myself.
The last pull requests of mine that I merged into statsmodels had
around 95% test coverage, almost all of it verified against other
statistical packages, but they had no unit tests for dtypes, pandas
DataFrames, or anything "weird".
(I will need to go back and add tests for pandas dataframes.)
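A minimal sketch of the kind of dtype/DataFrame test that is missing; `sample_std` and `check_dtypes` are hypothetical names used only for illustration, with a toy statistic standing in for the real estimator:

```python
import numpy as np
import pandas as pd

def sample_std(x):
    # toy statistic under test: sample standard deviation (ddof=1)
    return np.asarray(x, dtype=float).std(ddof=1)

def check_dtypes(func, data):
    # run func on several input types and compare against the float64 result
    expected = func(np.asarray(data, dtype=np.float64))
    for converted in (
        np.asarray(data, dtype=np.float32),
        np.asarray(data, dtype=np.int64),
        pd.Series(data),
        pd.DataFrame({"x": data})["x"],
    ):
        assert np.allclose(func(converted), expected, rtol=1e-5)

check_dtypes(sample_std, [1, 2, 3, 4, 5])
```

Running one such check per estimator would catch the "works for float64 arrays, breaks for Series" class of bugs that verification against other packages does not cover.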