[SciPy-Dev] ANN: SciPy 0.10.0 beta 2

Christoph Gohlke cgohlke@uci....
Sun Sep 18 16:44:28 CDT 2011

On 9/18/2011 1:04 PM, Stéfan van der Walt wrote:
> On Sun, Sep 18, 2011 at 12:37 PM, Christoph Gohlke <cgohlke@uci.edu> wrote:
>> 4) FAIL: test_datatypes.test_uint64_max
>> ----------------------------------------------------------------------
>> Traceback (most recent call last):
>>    File "X:\Python27\lib\site-packages\nose\case.py", line 197, in runTest
>>      self.test(*self.arg)
>>    File
>> "X:\Python27\lib\site-packages\scipy\ndimage\tests\test_datatypes.py",
>> line 57, in test_uint64_max
>>      assert_true(x[1] > (2**63))
>> AssertionError: False is not true
>> This is due to the 32-bit Visual C compiler using signed int64 when
>> converting from uint64 to double.  Anyway, it seems unreliable to me
>> to cast back and forth between uint64 and double for large numbers
>> because of potential overflow and precision loss.
> What would be the correct way to write this test?  It is simply meant
> to ensure that numbers close to the upper limit of 64-bit uints are
> preserved during interpolation operations.
> Regards
> Stéfan

Hi Stéfan,

I think the test is fine for that purpose. It fails with 32-bit msvc9 
because of a MS bug, but it passes when building with the Intel or 
64-bit msvc9 compilers.
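To make the failure mode concrete, here is a small sketch (mine, not 
from the test suite). The "buggy" path only *simulates* the 32-bit MSVC 
behavior by reinterpreting the uint64 bit pattern as signed int64 before 
converting to double:

```python
import numpy as np

# A value just below the uint64 maximum, like the one in the ndimage test.
big = np.uint64(2**64 - 1025)

# Correct unsigned conversion: the value stays near 2**64.
correct = np.float64(big)

# Simulated buggy conversion: reinterpret the bits as signed int64
# first, so values >= 2**63 come out negative after the cast to double.
as_signed = big.view(np.int64)
buggy = np.float64(as_signed)

print(correct > 2**63)  # True
print(buggy > 2**63)    # False -- the converted value is negative
```

With such a compiler, the interpolated result compared in the test's 
`assert_true(x[1] > (2**63))` can never pass, regardless of what 
ndimage itself does.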

The second part of my comment was more a reminder of the fact that, for 
example:

 >>> np.uint64(np.float64(2**61) + 100.0) == np.uint64(2**61)
True

To me it does not look like ndimage was designed to reliably 
interpolate integers > 2**53. Maybe I am wrong. Is there a test 
for that?
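For reference, a short sketch of the float64 precision limits in play 
(my own examples, not from the test suite):

```python
import numpy as np

# float64 has a 53-bit significand, so every integer up to 2**53 is
# exactly representable; above that, gaps between representable
# integers appear.
exact = np.uint64(2**53)
print(np.uint64(np.float64(exact)) == exact)        # True
print(np.uint64(np.float64(exact + np.uint64(1))))  # 9007199254740992 -- the +1 is lost

# At 2**61 the spacing between adjacent float64 values is 2**9 = 512,
# so adding 100.0 is rounded away entirely.
print(np.uint64(np.float64(2**61) + 100.0) == np.uint64(2**61))  # True
```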

