[SciPy-user] unittests for scipy.stats: assert_almost_equal question

eric eric at scipy.org
Sun Jan 20 04:36:31 CST 2002


Hey Louis,

> I'm writting unittests for the scipy.stats framework (partly to
> teach myself unit testing, partly to teach myself the stats
> functions).  I'm writing them using Wilkinson's Statistic Quiz
> (ref. in code attached) for data and comparison numbers.

A thousand blessings upon you.  I immediately committed it to the CVS!

>
> Right now, I get 2 failures and 2 errors (using Numeric arrays).  I
> think the complete framework needs another set of tests for the
> same data in lists form (if I understand the scipy.stats module
> correctly)

Just test the Numeric versions.  The list versions are going away entirely.
In fact, the interface to the stats functions is due for a *major* overhaul.
There is a partially converted version called new_stats.py in the CVS that
is destined to replace stats.py, but it needs a ton of work.  Notes on the
changes needed are here.

    http://www.scipy.org/Members/eric/stats_notes.html

and the current incarnation of the changes is here:


http://scipy.net/cgi-bin/viewcvs.cgi/scipy/stats/new_stats.py?rev=1.6&content-type=text/vnd.viewcvs-markup

This is a project in need of a champion, and anyone who wants to pick it up
will get another thousand blessings.

Also, this is definitely *not* to discourage you from making unit tests for
the current version.  If the tests are there, we can convert them to new
interfaces later.

>
> Of the failures/errors, some of them may be in the code, but one of
> them maybe something I'm missing in the unit testing framework.
> For one of the tests, I get the following error:
>
> ======================================================================
> FAIL: check_stdHUGE (__main__.test_basicstats)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "C:\Documents and Settings\kll560\My
> Documents\testdata\test_stats.py", line 150, in check_stdHUGE
>     assert_almost_equal(y,2.738612788e+12)
>   File "C:\apps\Python\scipy\scipy_test.py", line 223, in
> assert_almost_equal
>     assert desired == actual, msg
> AssertionError:
> Items are not equal:
> DESIRED: 2.738612788e+012
> ACTUAL: 2.73861278753e+012
> ----------------------------------------------------------------------
>
> The assert_almost_equal function is being called using the default
> setting of decimal=7   Since here it seems I have 10 digits of
> accuracy, why did this fail?

decimal=7 does not mean 7 significant digits; it means 7 decimal places past
the decimal point.  The way assert_almost_equal works is by using the
following comparison:

    round(abs(desired - actual),decimal) == 0

In this case, abs(desired - actual) is about 470.  Rounding that to 7 digits
past the decimal point leaves it as 470, which fails the test of being equal
to 0.  Maybe a test function should be added that compares to a specified
number of significant digits for cases like these.
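To make the failure concrete, here is a minimal sketch of the comparison
described above, plus a possible significant-digits variant.  Note that
assert_almost_equal_sigfig is a hypothetical helper for illustration, not
part of scipy_test.py:

```python
def assert_almost_equal(actual, desired, decimal=7):
    # The comparison described above: fails when the absolute
    # difference, rounded to `decimal` places past the decimal
    # point, is nonzero.
    assert round(abs(desired - actual), decimal) == 0, \
        "Items are not equal:\nDESIRED: %r\nACTUAL: %r" % (desired, actual)

def assert_almost_equal_sigfig(actual, desired, significant=7):
    # Hypothetical variant: compare to `significant` significant
    # digits by scaling the tolerance with the magnitude of the
    # desired value.
    assert abs(desired - actual) <= abs(desired) * 10.0 ** (-significant), \
        "Items are not equal to %d significant digits:" \
        "\nDESIRED: %r\nACTUAL: %r" % (significant, desired, actual)

# The failing case from the traceback: the difference is about 470,
# and rounding 470 to 7 decimal places cannot make it zero, so the
# decimal-based test fails even though ~10 significant digits agree.
# The significant-digits version passes.
assert_almost_equal_sigfig(2.73861278753e+12, 2.738612788e+12)
```

With the values from the traceback, abs(desired - actual) is roughly 470,
while a 7-significant-digit tolerance on 2.738612788e+12 is about 2.7e+5, so
the significant-digits check succeeds where the decimal-places check cannot.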

Thanks tons for working on this,

eric





