[Numpy-discussion] change made to test_print.py

David Cournapeau cournape@gmail....
Thu Jan 8 14:26:17 CST 2009


On Fri, Jan 9, 2009 at 5:11 AM, Christopher Hanley <chanley@stsci.edu> wrote:
> David Cournapeau wrote:
>> On Fri, Jan 9, 2009 at 4:29 AM, Christopher Hanley <chanley@stsci.edu> wrote:
>>> David Cournapeau wrote:
>>>> On Fri, Jan 9, 2009 at 1:37 AM, Christopher Hanley <chanley@stsci.edu> wrote:
>>>>> Hi,
>>>>>
>>>>> I've committed the following change to test_print.py to fix one of the
>>>>> tests.
>>>>>
>>>> Hi Christopher,
>>>>
>>>> Please do not modify those tests - they are supposed to fail,
>>>>
>>>> David
>>>> _______________________________________________
>>>> Numpy-discussion mailing list
>>>> Numpy-discussion@scipy.org
>>>> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>>> Hi David,
>>>
>>> Sorry.  Should these tests be marked as "known failures" then?
>>
>> No. The problems are known and are being fixed (in a branch). Since
>> they exist only in the development trunk, I don't see any harm in
>> having failures for some time,
>>
>> David
>
> I would disagree.  If you were to attempt the following:
>
> n = numpy.test()
> n.wasSuccessful()
>
> You would expect the result to be 'True'.  If not, it is necessary to
> find out why.  Right now the following occurs:
>
>  >>> n.wasSuccessful()
> False
>
> I have no way of knowing that you wanted those tests to fail unless
> you have them marked as KNOWNFAIL.  Since we use numpy in our
> production systems, I need to determine why numpy is failing.  We
> track the changes on the trunk because we need to know how changes
> will affect our code prior to our customers downloading the latest
> numpy release.
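
[The behaviour Christopher describes can be sketched with the standard
library's unittest module. This is only a stand-in for illustration:
NumPy at the time used nose-style test decorators rather than unittest,
and the test names and assertions below are hypothetical. The point it
shows is that a test marked as an expected failure no longer makes
wasSuccessful() return False, yet is still recorded on the result
object:]

```python
import unittest

class FormatTests(unittest.TestCase):
    # Hypothetical stand-in for one of the failing formatting tests.
    @unittest.expectedFailure
    def test_known_broken_formatting(self):
        self.assertEqual(0.1 + 0.2, 0.3)  # fails: floating-point rounding

    def test_something_else(self):
        self.assertEqual(1 + 1, 2)  # an ordinary passing test

suite = unittest.defaultTestLoader.loadTestsFromTestCase(FormatTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)

print(result.wasSuccessful())        # True: the expected failure does not count
print(len(result.expectedFailures))  # 1: but it is still visible on the result
```

[With such a marker, a production user checking wasSuccessful() would
see True on the trunk while the known problems stay discoverable on the
result object - the trade-off the two sides are debating here.]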

I don't understand: you can't expect the trunk to always work. We try
not to break it - but sometimes it does not work.

Personally, I don't like knownfailure much anyway: it is too easy to
tag a test as a known failure, and then nobody cares about it anymore.
Those formatting problems existed before - the tests only reveal the
problem, they do not cause it. So I don't understand why this is so
important: a 100 % passing test suite with a hidden problem and a 95 %
passing test suite that shows the problem amount to the same thing;
the code in numpy itself is exactly the same.

David

