[SciPy-dev] Ticket #467 pending decision for 3 months
Wed Feb 10 09:44:55 CST 2010
2010/2/10 Stéfan van der Walt <firstname.lastname@example.org>:
> Hi Robert
> On 10 February 2010 06:38, Robert Layton <email@example.com> wrote:
>> I submitted a fix for ticket #467 a while ago, which is quite a simple fix.
>> As scipy's mean and std functions are now passing through to numpy, there is
>> little reason to test them as part of scipy (the appropriate tests should be
>> in numpy).
>> Even if it's decided that the tests should be retained, there's a patch for
>> that (r) as well.
> Sorry for not paying attention to this earlier. I think we should
> remove tests that only validate numpy's behaviour, so your
> `without_numpy.patch' looks good. Unfortunately, it doesn't apply
> cleanly; would you have a chance to look at it again?
(This had been sitting in my drafts folder since November.)
Sorry for not replying earlier. I had seen your patches before, but I
wasn't sure what I would prefer.
I agree that numpy functions should be tested in numpy. On the other hand,
the stats tests already include additional test matrices that can be used
to check the precision of the numpy functions, and I would like to keep
that coverage.
(As an example, numpy random is mostly tested in scipy.stats, since the
pdf, pmf and cdf of the distributions are available there.)
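As a sketch of what such a cross-check can look like (my own minimal example, not code from the actual test suite), a Kolmogorov-Smirnov test can compare numpy's samples against the corresponding cdf from scipy.stats:

```python
import numpy as np
from scipy import stats

# Draw samples from numpy's normal generator with a fixed seed
rng = np.random.default_rng(12345)
sample = rng.normal(loc=0.0, scale=1.0, size=1000)

# Compare the empirical distribution against the normal cdf from scipy.stats;
# a correct generator should not be rejected by the KS test
statistic, pvalue = stats.kstest(sample, stats.norm.cdf)
print(statistic, pvalue)
```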
The point for stats is that I didn't find any precision tests in the
numpy test suite for mean, var and so on.
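For illustration, such a precision test could follow the NIST pattern of comparing against certified values (the sample and the "certified" numbers below are made up for this toy case, not actual NIST data):

```python
import numpy as np
from numpy.testing import assert_allclose

# Hypothetical badly scaled sample: a constant offset dwarfs the variation
offset = 1.0e6
data = offset + np.array([0.1, 0.2, 0.3, 0.4, 0.5])

# "Certified" values worked out by hand for this toy sample
certified_mean = offset + 0.3
certified_var = 0.02  # population variance (ddof=0) of [0.1, ..., 0.5]

assert_allclose(np.mean(data), certified_mean, rtol=1e-12)
assert_allclose(np.var(data), certified_var, rtol=1e-7)
```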
When I wrote the anova tests using the NIST reference cases,
numpy.mean did pretty badly on the badly scaled test cases. I never
checked the NIST test cases specifically for mean, var, ...
I still don't know where the precision tests should live, but the outcome
would be very useful. In ANOVA I ended up calculating the mean twice
(if the dataset is badly scaled) to pass the NIST tests.
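The double computation is essentially a shift-and-recenter pass; roughly like this (a minimal sketch of the idea, not the actual scipy code):

```python
import numpy as np

def shifted_mean(x):
    # First pass: a rough mean, which can lose precision on badly scaled data
    m0 = np.mean(x)
    # Second pass: the mean of the residuals corrects the first estimate
    return m0 + np.mean(x - m0)

# Badly scaled sample: large constant offset, small variation
x = 1e9 + np.array([0.1, 0.2, 0.3, 0.4])
print(shifted_mean(x))
```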
(google is not very informative about donkeys and piles of hay)