[Numpy-discussion] Making numpy sensible: backward compatibility please
Mon Oct 1 14:30:24 CDT 2012
On Fri, Sep 28, 2012 at 3:11 PM, Charles R Harris wrote:
> If the behaviour is not specified and tested, there is no guarantee that it
> will continue.
This is an open-source project - there is no guarantee of ANYTHING.
But that being said, the specification and testing of numpy is quite
weak -- we have no choice but to use unspecified and untested (and how
do we even know what is tested?) features.
So, by definition, the current code base IS the specification -- if it
changes, that is a change in the spec.
Travis is right -- we users are the test cases -- which is better than
most code, frankly.
Perhaps more effort could be put into considering the impact that
internal changes have, and announcing them more aggressively? So we in
the user community have a better idea what to look for when
adopting/testing a new release (candidate).
Another thought -- perhaps we're being too incremental, and a py3k-type
change is in order -- do a whole bunch at once. Though, as we've
seen with py3k, that would mean maintaining two versions for a good
while.
Also -- last time I tried adding some tests, I found it to be very
hard to figure out where and how to put them in. Granted, I only poked
at it for an hour or two -- but if users could add tests in less than
an hour, we'd get a lot more tests. Maybe it just needs more
documentation, or maybe I just didn't know nose at the time -- I don't
remember, but I do know I was discouraged, and haven't tried since.
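For what it's worth, the barrier may be lower than it looks: a numpy
test is just a function using the helpers in numpy.testing. A minimal
sketch (the test name and the behavior checked here are my own
illustration, not anything from the thread):

```python
import numpy as np
from numpy.testing import assert_array_equal


def test_copy_returns_equal_independent_array():
    # Hypothetical example test: np.copy should return an array equal
    # to the original, and modifying the copy must not touch the original.
    a = np.arange(6).reshape(2, 3)
    b = np.copy(a)
    assert_array_equal(a, b)
    b[0, 0] = 99
    assert a[0, 0] == 0  # the original is unchanged
```

Dropping a function like this into an existing test_*.py file under
numpy/core/tests is enough for the test runner to pick it up.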
Maybe a test change log of some sort would be helpful: easy access to
know what tests had to be changed/added to accommodate changed behavior
(i.e., any test that passes with version i but would not have
passed with version i-1) -- looking over that might be a good way to figure
out if your own code is likely to be affected.
>> I think that this is a cultural issue: priority is not given to stability
>> and backward compatibility. I think that this culture is very much
>> ingrained in the Python world, that likes iteratively cleaning its
>> software design. For instance, I have the feeling that in the
>> scikit-learn, we probably fall in the same trap. That said, such a
>> behavior cannot fare well for a base scientific environment. People tell
>> me that if they take old matlab code, the odds that it will still work
>> are much higher than with Python code. As a geek, I tend to reply that we
>> get a lot out of this mobility, because we accumulate less cruft.
>> However, in research settings, for reproducibility reasons, one needs to
>> be able to pick up an old codebase and trust its results without knowing
>> its intricacies.
> Bitch, bitch, bitch. Look, I know you are pissed and venting a bit, but this
> problem could have been detected and reported 6 months ago, that is, unless
> it is new due to development on your end.
>> From a practical standpoint, I believe that people implementing large
>> changes to the numpy codebase, or any other core scipy package, should
>> think really hard about their impact. I do realise that the changes are
>> discussed on the mailing lists, but there is a lot of activity to follow
>> and I don't believe that it is possible for many of us to monitor the
>> discussions. Also, putting more emphasis on backward compatibility is
>> possible. For instance, the 'order' parameter added to np.copy could have
>> defaulted to the old behavior, 'K', for a year, with a
>> DeprecationWarning, same thing for the casting rules.
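The transition Gael suggests could look something like the sketch below:
a wrapper that keeps the old default ('K') when the caller does not pass
`order` explicitly, and emits a DeprecationWarning in the meantime. This
is my own illustration of the proposal, not code from numpy; the wrapper
name and the eventual new default are assumptions.

```python
import warnings

import numpy as np


def copy_compat(a, order=None):
    """Hypothetical shim for the suggested np.copy transition.

    If the caller does not pass `order` explicitly, fall back to the
    old behavior ('K'), but warn that the default will change so users
    have a release cycle to adapt.
    """
    if order is None:
        warnings.warn(
            "the default order of np.copy will change in a future "
            "release; pass order= explicitly to silence this warning",
            DeprecationWarning,
            stacklevel=2,
        )
        order = 'K'  # old default, per the suggestion above
    return np.copy(a, order=order)
```

Callers who already pass `order` see no warning and no behavior change;
everyone else gets the old result plus a clear signal of what is coming.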
>> Thank you for reading this long email. I don't mean it to be a complaint
>> about the past, but more a suggestion on something to keep in mind when
>> making changes to core projects.
Christopher Barker, Ph.D.
Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception