[SciPy-User] Return type of scipy.interpolate.splev for input array of length 1
Wed Jan 20 15:18:39 CST 2010
On Wed, Jan 20, 2010 at 3:00 PM, Anne Archibald wrote:
> 2010/1/19 Pauli Virtanen <email@example.com>:
>> Mon, 18 Jan 2010 10:59:46 -0500, josef.pktd wrote:
>>> On Sun, Jan 17, 2010 at 5:25 AM, Yves Frederix <firstname.lastname@example.org> wrote:
>>>> It was rather unexpected that the type of input and output data are
>>>> different. After checking interpolate/fitpack.py it seems that this
>>>> behavior results from the fact that the length-1 case is explicitly
>>>> treated differently (probably to be able to deal with the case of
>>>> scalar input, for which scalar output is expected):
>>>> 434 def splev(x,tck,der=0):
>>>> ...
>>>> 487     if ier: raise TypeError,"An error occurred"
>>>> 488     if len(y)>1: return y
>>>> 489     return y[0]
>>>> Wouldn't it be less confusing to have the return value always have the
>>>> same type as the input data?
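A minimal toy reproduction of the quoted fitpack logic (the doubling below is just a stand-in for the actual spline evaluation) shows why the output type changes for length-1 input:

```python
import numpy as np

def toy_splev(x):
    # Stand-in for the quoted logic: evaluate, then unwrap
    # length-1 results to a scalar, as fitpack.splev did.
    y = np.atleast_1d(np.asarray(x, dtype=float)) * 2.0
    if len(y) > 1:
        return y
    return y[0]

print(type(toy_splev([1.0, 2.0])))  # ndarray
print(type(toy_splev([1.0])))       # numpy scalar, not a (1,) array
```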
>>> I don't know of any "official" policy.
>> I think (unstructured) interpolation should respect
>> input.shape == output.shape
>> also for 0-d. So yes, it's a wart, IMHO.
>> Another question is: how many people actually have code that depends on
>> this wart, and can it be fixed? I'd guess there's not much problem: (1,)
>> arrays function nicely as scalars, but not vice versa because of
> More generally, I think many functions should preserve the shape of
> the input array. Unfortunately it's often a hassle to do this: a few
> functions I have written start by checking whether the input is a
> scalar, setting a boolean and converting it to an array of size one;
> then at the end, I check the boolean and strip the array wrapping if
> the input is a scalar. It's annoying boilerplate, and I suspect that
> many functions don't handle this just because it's a nuisance. Some
> handy utility code might help.
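The boilerplate described above could be factored into a decorator; this is a hypothetical utility (not an existing scipy or numpy helper), sketched under the assumption that the wrapped function's first argument is the array-like input:

```python
import numpy as np
from functools import wraps

def preserve_scalar(func):
    # Hypothetical helper: wrap scalar input into a 1-d array before
    # calling `func`, then strip the wrapping from the result, so that
    # scalar in -> scalar out and array in -> array out.
    @wraps(func)
    def wrapper(x, *args, **kwargs):
        scalar_input = np.ndim(x) == 0
        result = func(np.atleast_1d(np.asarray(x)), *args, **kwargs)
        return result[0] if scalar_input else result
    return wrapper

@preserve_scalar
def double(x):
    return 2.0 * x
```

With this, `double(3.0)` returns a scalar while `double(np.array([1.0, 2.0]))` returns a length-2 array, without repeating the check in every function body.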
> It would also be good to have a generic test one could apply to many
> functions to check that they preserve array shapes (0-d, 1-d of size
> 1, many-dimensional, many-dimensional with a zero dimension), and
> scalarness. Together with a test for preservation of arbitrary array
> subclasses (and correct functioning when handed matrices), one might
> be able to shake out a lot of minor easy-to-fix nuisances.
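One possible shape-only version of such a generic test (the subclass and matrix checks would need more machinery) might look like this, assuming the function under test takes a single array argument:

```python
import numpy as np

def check_shape_preservation(func):
    # Call `func` with inputs of several shapes (0-d, length-1, 1-d,
    # many-dimensional, and with a zero dimension) and verify that the
    # output shape matches the input shape.
    for shape in [(), (1,), (5,), (2, 3), (0,), (2, 0, 3)]:
        y = func(np.ones(shape))
        assert np.shape(y) == shape, (
            f"shape {shape} not preserved: got {np.shape(y)}")
    # Scalarness: a plain Python float should not come back as an array.
    assert np.ndim(func(1.0)) == 0, "scalar input returned a non-scalar"

check_shape_preservation(np.sin)  # ufuncs preserve input shape
```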
I just checked again: the conversion in the distributions is weaker,

    if output.ndim == 0:

so only 0-d outputs are converted back to scalars, and length-1 arrays
pass through unchanged. I just followed the pattern of Travis in this.
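The weaker conversion can be sketched as follows (the doubling is a stand-in for the real computation, and the function name is illustrative, not scipy's):

```python
import numpy as np

def evaluate(x):
    # Weaker conversion: only 0-d outputs are unwrapped to a scalar,
    # so a length-1 array stays a length-1 array.
    output = np.asarray(np.asarray(x, dtype=float) * 2.0)
    if output.ndim == 0:
        return output[()]  # unwrap the 0-d array to a numpy scalar
    return output

print(np.shape(evaluate([1.0])))  # (1,) -- the (1,) array is preserved
print(np.ndim(evaluate(1.0)))     # 0  -- scalar in, scalar out
```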
Handling and preserving array subclasses is a lot of work: it increases
the size of simple functions considerably and roughly triples (not
checked) the number of required tests (I just tried with stats.gmean,
hmean and zscore). I don't see a way to write generic tests that would
work across different signatures and argument types.