[SciPy-Dev] Scipy 1.0 roadmap
Sat Sep 21 13:57:06 CDT 2013
On Sat, Sep 21, 2013 at 8:54 PM, Ralf Gommers <firstname.lastname@example.org> wrote:
> Hi all,
> At EuroScipy Pauli, David and I sat together and drafted a roadmap for
> Scipy 1.0. We then discussed this offline with some of the other currently
> most active core devs, to get it into a state that's ready for discussion
> on this list. So here it is: https://github.com/scipy/scipy/pull/2908
> Our aim is for this roadmap to help guide us towards a 1.0 version, which
> will contain only code that we consider to be "of sufficient quality".
> Also, it will help to communicate to new and potential developers where
> their contributions are especially needed.
> In order to discuss/review this roadmap without generating a monster
> thread, I propose the following:
> - topics like "do we need a roadmap?" or "what does 1.0-ready really
> mean?" are discussed on this thread.
> - things in the General section (API changes, documentation/test/build
> guidelines, etc.), are discussed on this thread as well.
> - for discussion of module-specific content, start a new thread and name
> it "1.0 roadmap: <module_name>".
> - for minor things, comment on the PR.
GitHub may not survive forever, so for the record here is the full text of the roadmap:
Roadmap to Scipy 1.0
This roadmap provides a high-level view on what is needed per scipy
submodule in terms of new functionality, bug fixes, etc. before we can
release a ``1.0`` version of Scipy. Things not mentioned in this roadmap are
not necessarily unimportant or out of scope, however we (the Scipy developers)
want to provide to our users and contributors a clear picture of where we are
going and where help is needed most urgently.
When a module is in a 1.0-ready state, it means that it has the functionality
we consider essential and has an API and code quality (including documentation
and tests) that's high enough.
This roadmap will be evolving together with Scipy. Updates can be submitted as
pull requests and, unless they're very minor, have to be discussed on the
scipy-dev mailing list.
In general, we want to take advantage of the major version change to fix the
known warts in the API. The change from 0.x.x to 1.x.x is the chance to fix
those API issues that we all know are ugly warts. Example: unify the
convention for specifying tolerances (including absolute, relative, argument
and function value tolerances) of the optimization functions. More API changes
will be noted in the module sections below.
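As a concrete illustration of the tolerance-unification idea, here is a minimal sketch of one possible unified convention, with a single hypothetical stopping test (``converged`` is not an existing scipy function) that combines absolute and relative tolerances on both the argument and the function value, driven by a toy bisection loop:

```python
import math

def converged(x_new, x_old, f_new, f_old, xtol=1e-8, ftol=1e-8, rtol=1e-10):
    """Hypothetical unified stopping test: absolute/relative tolerance
    on both the argument (xtol) and the function value (ftol)."""
    x_ok = abs(x_new - x_old) <= xtol + rtol * abs(x_new)
    f_ok = abs(f_new - f_old) <= ftol + rtol * abs(f_new)
    return x_ok and f_ok

# Toy usage: bisection on f(x) = x**2 - 2, stopping via the shared test.
def f(x):
    return x * x - 2

lo, hi = 0.0, 2.0
x_old, f_old = lo, f(lo)
while True:
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
    if converged(mid, x_old, f(mid), f_old, xtol=1e-12, ftol=1e-12):
        break
    x_old, f_old = mid, f(mid)

print("root ~= %.6f" % mid)
```

The point of the sketch is that every solver could share one such test instead of each defining its own mix of ``xtol``/``ftol``/``gtol``/``tol`` with differing semantics.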
It should be made more clear what is public and what is private in scipy.
Everything private should be underscored as much as possible. Now this is done
consistently when we add new code, but for 1.0 it should also be done for
existing code.
Test coverage of code added in the last few years is quite good, and we aim for
a high coverage for all new code that is added. However, there is still a
significant amount of old code for which coverage is poor. Bringing that up to
the current standard is probably not realistic, but we should plug the biggest
holes. Additionally the coverage should be tracked over time and we should
ensure it only goes up.
Besides coverage there is also the issue of correctness - older code may have a
few tests that provide decent statement coverage, but that doesn't necessarily
say much about whether the code does what it says on the box. Therefore careful
review of some parts of the code (``stats`` and ``signal`` in particular) is
needed.
The documentation is in decent shape. Expanding current docstrings and
putting them in the standard numpy format should continue, so the number of
reST errors and glitches in the html docs decreases. Most modules also have a
tutorial in the reference guide that is a good introduction, however there are
a few missing or incomplete tutorials - this should be fixed.
Scipy 1.0 will likely contain more backwards-incompatible changes than a
regular release. Therefore we will have a longer-lived maintenance branch of
the last release before 1.0.
It's not clear how much functionality can be Cythonized without making the
files too large. This needs measuring.
Bento will be officially supported as the second build tool besides distutils.
At the moment it still has an experimental, use-at-your-own-risk status, but
that has to change.
A more complete continuous integration setup is needed; at the moment we often
find out right before a release that there are issues on some less-often used
platform or Python version. At least needed are a Windows, Linux and OS X
build, coverage of the lowest and highest Python and Numpy versions that are
supported, a Bento build and a PEP8 checker.
Most of the cluster module is a candidate for a Cython rewrite; this will clean
up the code and it will be more maintainable than the current C code. The API
should remain (or become) simple and easy to understand. Support for arbitrary
distance metrics in ``scipy.spatial`` is probably best left to
scikit-learn or other more specialized libraries.
``scipy.constants`` is basically done, low-maintenance and without open issues.
For ``scipy.fftpack``:
- solve issues with single precision: large errors, disabled for
- fix caching bug
- Bluestein algorithm nice to have, padding is alternative
- deprecate fftpack.convolve as public function (it was not meant to be public)
- resolve differences between ``signal.fftconvolve`` /
``signal.convolve`` and ``numpy.convolve``
There's a large overlap with ``numpy.fft``. This duplication has to change
(both are too widely used to deprecate one); in the documentation we should
make clear that ``scipy.fftpack`` is preferred over ``numpy.fft``.
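Resolving the convolve differences starts from agreeing on what "convolution" should compute. A pure-Python reference of the "full" discrete convolution (the definition numpy.convolve uses by default) makes the target behavior explicit; the helper name ``conv_full`` is made up here:

```python
def conv_full(a, b):
    """Reference 'full' discrete convolution: output length is
    len(a) + len(b) - 1, and (a * b)[n] = sum_k a[k] * b[n - k]."""
    n_out = len(a) + len(b) - 1
    out = [0.0] * n_out
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# Should match numpy.convolve([1, 2, 3], [0, 1, 0.5]) in 'full' mode.
print(conv_full([1, 2, 3], [0, 1, 0.5]))
```

The real differences between the scipy and numpy routines lie in boundary handling and output-size modes ('full'/'same'/'valid'), which is exactly where a single agreed-upon reference implementation would help.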
For ``scipy.integrate``, needed for the ODE solvers:
- documentation is pretty bad, needs fixing
- figure out if/how to integrate scikits.odes (Sundials wrapper)
- figure out what to deprecate
The numerical integration functions are in good shape, not much to do here.
For ``scipy.interpolate``:
- Transparent B-splines and their usage in the interpolation routines are needed.
- Both fitpack and fitpack2 interfaces will be kept.
- splmake should go; it is a different spline representation --> we need exactly one.
- interp1d/interp2d are somewhat ugly but widely used, so we keep them.
- Regular grid interpolation routines are needed.
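For reference, the core of what ``interp1d`` does in its default linear mode can be written in a few lines of pure Python; this is a minimal sketch (the function name ``interp_linear`` is made up), assuming sorted, distinct knots and no extrapolation:

```python
from bisect import bisect_right

def interp_linear(xp, fp, x):
    """Minimal 1-D linear interpolation over sorted knots ``xp``:
    find the bracketing interval, then blend the two endpoint values."""
    if not xp[0] <= x <= xp[-1]:
        raise ValueError("x outside interpolation range")
    # Index of the right-hand knot, clamped so x == xp[-1] still works.
    i = min(bisect_right(xp, x), len(xp) - 1)
    t = (x - xp[i - 1]) / (xp[i] - xp[i - 1])
    return (1 - t) * fp[i - 1] + t * fp[i]

print(interp_linear([0.0, 1.0, 2.0], [0.0, 10.0, 0.0], 1.5))  # → 5.0
```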
For ``scipy.io.wavfile``:
- PCM float will be supported, for anything else use audiolab or other libraries.
- raise errors instead of warnings if data is not understood.
Other sub-modules (matlab, netcdf, idl, harwell-boeing, arff, matrix market)
are in good shape.
``scipy.lib`` contains nothing public anymore, so rename to ``scipy._lib``.
For ``scipy.linalg``:
- remove functions that are duplicated in numpy.linalg
- get_lapack_funcs should always use flapack
- cblas, clapack are deprecated, will be removed
- wrap more LAPACK functions
- one too many functions for LU decomposition, remove one
``scipy.misc`` will be removed as a public module. The functions in it can be
moved to other modules:
- pilutil, images : ndimage
- comb, factorials, logsumexp, pade : special
- doccer : move to scipy._lib
- info, who : these are in numpy
- derivative, central_diff_weights : remove, replace with more extensive
functionality for numerical differentiation - likely in a new module
``scipy.diff``, as discussed in
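The kind of functionality ``derivative`` provides today is a simple finite-difference rule; a minimal sketch of the second-order central difference (helper name made up) shows what any more extensive ``scipy.diff`` module would generalize:

```python
def central_derivative(f, x, h=1e-5):
    """Second-order central difference approximation:
    f'(x) ≈ (f(x + h) - f(x - h)) / (2h), with O(h**2) error."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

print(central_derivative(lambda x: x ** 3, 2.0))  # ≈ 12.0
```

A dedicated module could add higher-order rules, step-size selection and error estimates, which the current two functions lack.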
Underlying ndimage is a powerful interpolation engine. Unfortunately, it was
never decided whether to use a pixel model (``(1, 1)`` elements with centers at
``(0.5, 0.5)``) or a data point model (values at points on a grid). Over time
it seems that the data point model is better defined and easier to implement
consistently. We therefore propose to move to this data representation for 1.0,
and to vet all interpolation code to ensure that boundary values,
transformations, etc. are correctly computed. Addressing this issue will close
several issues, including #1323, #1903, #2045 and #2640.
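The half-sample offset between the two conventions is the source of most of these bugs. A tiny illustration (helper names made up): under the pixel model element ``i`` is a unit cell centred at ``i + 0.5``, while under the data point model the value sits exactly at coordinate ``i``, so "the centre of the domain" of a 4-element signal lands in two different places:

```python
def pixel_center(i):
    # Pixel model: element i is a unit cell [i, i+1) centred at i + 0.5.
    return i + 0.5

def point_coord(i):
    # Data point model: the value of element i sits exactly at coordinate i.
    return float(i)

n = 4
# Centre of the whole domain under each convention:
pixel_mid = 0.5 * (pixel_center(0) + pixel_center(n - 1))  # → 2.0
point_mid = 0.5 * (point_coord(0) + point_coord(n - 1))    # → 1.5
print(pixel_mid, point_mid)
```

Any transformation (zoom, rotate, shift) computed under one convention but applied under the other is off by exactly this half-sample, which is how boundary values end up subtly wrong.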
For ``scipy.odr``: rename the module to ``regression`` or ``fitting``, include
``optimize.curve_fit``. This module will then provide a home for other fitting
functionality - what exactly needs to be worked out in more detail; a
discussion can be found at https://github.com/scipy/scipy/pull/448.
Overall ``scipy.optimize`` is in reasonably good shape, however it is missing a
few more good global optimizers as well as large-scale optimizers. These should
be added. Other things that are needed:
- deprecate ``anneal``, it just doesn't work well enough.
- deprecate the ``fmin_*`` functions in the documentation, ``minimize`` is
preferred.
- clearly define what's out of scope for this module.
*Convolution and correlation*: (Relevant functions are convolve, correlate,
fftconvolve, convolve2d, correlate2d, and sepfir2d.) Eliminate the overlap with
`ndimage` (and elsewhere). From `numpy`, `scipy.signal` and `scipy.ndimage`
(and anywhere else we find them), pick the "best of class" for 1-D, 2-D and n-D
convolution and correlation, put the implementation somewhere, and use that
consistently throughout scipy.
*B-splines*: (Relevant functions are bspline, cubic, quadratic, gauss_spline,
cspline1d, qspline1d, cspline2d, qspline2d, cspline1d_eval, and qspline1d_eval.)
Move the good stuff to `interpolate` (with appropriate API changes to match how
things are done in `interpolate`), and eliminate any duplication.
*Filter design*: merge `firwin` and `firwin2` so `firwin2` can be removed.
*Continuous-Time Linear Systems*: remove `lsim2`, `impulse2`, `step2`. Make
`lsim`, `impulse` and `step` "just work" for any input system. Improve
performance of ltisys (fewer internal transformations between different
representations).
*Wavelets*: add proper wavelets, including discrete wavelet transform. What's
there now doesn't make much sense.
The sparse matrix formats are getting feature-complete but are slow ...
reimplement parts in Cython?
- Small matrices are slower than PySparse, this needs fixing.
There are a lot of formats. These should all be kept, but
improvements/optimizations should go into CSR/CSC, which are the preferred
formats.
Don't emulate np.matrix behavior, drop 2-D?
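For readers unfamiliar with why CSR/CSC are the preferred formats: CSR stores a matrix as three flat arrays (``data``, ``indices``, ``indptr``), which makes row-wise operations like matrix-vector products cache-friendly. A minimal pure-Python sketch of CSR matvec (helper name made up):

```python
def csr_matvec(data, indices, indptr, x):
    """y = A @ x for a CSR matrix: row i's nonzeros are
    data[indptr[i]:indptr[i+1]], in columns indices[indptr[i]:indptr[i+1]]."""
    n_rows = len(indptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# The 2x3 matrix [[1, 0, 2],
#                 [0, 3, 0]] in CSR form:
data = [1.0, 2.0, 3.0]
indices = [0, 2, 1]
indptr = [0, 2, 3]
print(csr_matvec(data, indices, indptr, [1.0, 1.0, 1.0]))  # → [3.0, 3.0]
```

The inner loop here is exactly the kind of tight per-element code where the Python-level overhead of the current implementation shows, and where Cython would pay off.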
``scipy.sparse.csgraph`` is in good shape.
For ``scipy.sparse.linalg``: Arpack is in good shape.
isolve:
- callback keyword is inconsistent
- tol keyword is broken, should be relative tol
- Fortran code not re-entrant (but we don't solve, maybe re-use from
dsolve:
- remove umfpack wrapper due to license reasons
- add sparse Cholesky or incomplete Cholesky
- look at CHOLMOD
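What "tol should be relative" means for the iterative solvers: the convergence test should compare the residual norm against ``tol * ||b||`` rather than against ``tol`` directly, so the stopping criterion is invariant to the scale of the problem. A minimal sketch (function name made up):

```python
import math

def has_converged(residual, b, tol=1e-8, mode="relative"):
    """Stopping test for an iterative solver: 'absolute' compares
    ||r|| to tol directly; 'relative' scales tol by ||b||."""
    r_norm = math.sqrt(sum(ri * ri for ri in residual))
    b_norm = math.sqrt(sum(bi * bi for bi in b))
    if mode == "relative":
        return r_norm <= tol * b_norm
    return r_norm <= tol

# Same residual, very large right-hand side: an absolute test never
# triggers at a reasonable tol, while the relative one does.
r = [1e-3, 0.0]
b = [1e6, 0.0]
print(has_converged(r, b, tol=1e-8, mode="absolute"))  # → False
print(has_converged(r, b, tol=1e-8, mode="relative"))  # → True
```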
KDTree/cKDTree and the QHull wrappers are in good shape. The ``distance``
module needs bug fixes in the distance metrics, and distance_wrap.c needs to be
cleaned up (maybe rewrite in Cython).
``scipy.special`` has a lot of functions that need improvements in precision.
All functions that are also implemented in mpmath can be tested against mpmath,
and should match well.
Things not in mpmath:
- <Pauli checks> some others
This is a large module with by far the most open issues. It has improved a lot
over the past few releases, but more cleanup and rewriting of functions is
needed. The Statistics Review milestone on Github gives a reasonable overview
of which functions need checking, documentation and tests.
- skew/kurtosis of a number of distributions needs fixing
- fix generic docstring examples, they should be valid Python and make sense
for each distribution
- document subclassing of distributions even better, make issues with the use
of instances clear.
All hypothesis tests should get a keyword 'alternative' where applicable (see
``stats.kstest`` for an example).
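The convention being asked for can be sketched for a statistic that is standard normal under the null; the helper names here (``z_pvalue``, ``_norm_cdf``) are hypothetical, and the normal CDF is built from ``math.erf``:

```python
import math

def _norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def z_pvalue(z, alternative="two-sided"):
    """Hypothetical helper showing the 'alternative' keyword convention
    for a statistic that is standard normal under the null hypothesis."""
    if alternative == "two-sided":
        return 2.0 * (1.0 - _norm_cdf(abs(z)))
    if alternative == "greater":
        return 1.0 - _norm_cdf(z)
    if alternative == "less":
        return _norm_cdf(z)
    raise ValueError("alternative must be 'two-sided', 'greater' or 'less'")

print(z_pvalue(1.96, "two-sided"))  # ≈ 0.05
```

Adopting one such keyword (with the same three accepted values) across all tests would remove the current inconsistency where some tests only report two-sided p-values.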
``gaussian_kde`` is in good shape but limited. It should probably not be
expanded further; this fits better in statsmodels (which already has a lot more
KDE functionality).
``stats.mstats`` is a useful module for working with data with missing values.
One problem it has though is that in many cases the functions have diverged
from their counterparts in `scipy.stats`. The ``mstats`` functions should be
updated so that the two sets of functions are consistent.
``scipy.weave`` is the only module that was not ported to Python 3. Effectively
it's already deprecated (not recommended for use in new code). In the future it
should be removed from scipy (it can be made into a separate module).