[Numpy-svn] r8175 - trunk/numpy/lib

numpy-svn@scip...
Sat Feb 20 12:16:06 CST 2010


Author: ptvirtan
Date: 2010-02-20 12:16:05 -0600 (Sat, 20 Feb 2010)
New Revision: 8175

Modified:
   trunk/numpy/lib/format.py
Log:
ENH: lib: write fortran-contiguous data to files using arr.T.tofile instead of arr.data (required for Py3 compatibility)

The issue is that when a buffer object is passed to Python's
io.BufferedWriter.write, it tries to obtain the buffer using PyBUF_ND
| PyBUF_C_CONTIGUOUS. This fails for strided arrays -- it seems to be
an issue in Python itself, which should probably request a SIMPLE buffer
instead.
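The workaround in the patch relies on the fact that the transpose of a Fortran-contiguous array is C-contiguous and shares the same memory, so serializing arr.T emits the bytes in their on-disk order without a copy of the strided buffer. A minimal sketch of that round trip (using tobytes(), the modern spelling of the tostring() call in the diff):

```python
import io
import numpy as np

# A Fortran-ordered (column-major) array: F-contiguous but not C-contiguous,
# so handing its raw buffer to io.BufferedWriter.write would fail.
arr = np.asfortranarray(np.arange(6, dtype=np.int64).reshape(2, 3))
assert arr.flags.f_contiguous and not arr.flags.c_contiguous

# arr.T is C-contiguous and shares memory with arr, so serializing the
# transpose writes the bytes exactly as they sit in memory.
buf = io.BytesIO()
buf.write(arr.T.tobytes('C'))

# Reading the bytes back in Fortran order reconstructs the original array.
restored = np.frombuffer(buf.getvalue(), dtype=np.int64).reshape(arr.shape,
                                                                 order='F')
assert np.array_equal(restored, arr)
```

For real file objects the patch uses arr.T.tofile(fp) instead, which writes the same byte stream directly without materializing an intermediate string.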

Modified: trunk/numpy/lib/format.py
===================================================================
--- trunk/numpy/lib/format.py	2010-02-20 18:15:51 UTC (rev 8174)
+++ trunk/numpy/lib/format.py	2010-02-20 18:16:05 UTC (rev 8175)
@@ -393,9 +393,10 @@
         # Instead, we will pickle it out with version 2 of the pickle protocol.
         cPickle.dump(array, fp, protocol=2)
     elif array.flags.f_contiguous and not array.flags.c_contiguous:
-        # Use a suboptimal, possibly memory-intensive, but correct way to
-        # handle Fortran-contiguous arrays.
-        fp.write(array.data)
+        if isfileobj(fp):
+            array.T.tofile(fp)
+        else:
+            fp.write(array.T.tostring('C'))
     else:
         if isfileobj(fp):
             array.tofile(fp)
@@ -447,7 +448,7 @@
             # This is not a real file. We have to read it the memory-intensive
             # way.
             # XXX: we can probably chunk this to avoid the memory hit.
-            data = fp.read(count * dtype.itemsize)
+            data = fp.read(int(count * dtype.itemsize))
             array = numpy.fromstring(data, dtype=dtype, count=count)
 
         if fortran_order:


