[Numpy-discussion] insanely slow writing to memory mapped array
Mathew Yeates
myeates at jpl.nasa.gov
Wed Nov 29 19:52:40 CST 2006
whoa. I just found out that A=A.transpose() does nothing but change A's
flags from C_CONTIGUOUS to F_CONTIGUOUS!!
Okay, so here's the question: I am reading data into the columns of
a matrix. In order to speed this up, I want to read values into the rows
of a matrix and, when I am all done, do a transpose. What's the best way?
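For context, a minimal sketch of the two behaviours (array names here are illustrative): ndarray.transpose() only returns a view with swapped strides and flags, while numpy.ascontiguousarray forces an actual copy into the new memory layout.

```python
import numpy as np

A = np.zeros((3, 5), dtype=np.float32)  # C-contiguous

# .transpose() (or .T) returns a *view*: no data moves,
# only the strides and flags change.
B = A.transpose()
print(A.flags['C_CONTIGUOUS'])  # True
print(B.flags['C_CONTIGUOUS'])  # False (B is F-contiguous)

# To actually rearrange the bytes in memory, force a copy:
C = np.ascontiguousarray(A.T)
print(C.flags['C_CONTIGUOUS'])  # True
print(C.shape)                  # (5, 3)
```

So "read into rows, then transpose" only pays for the data movement if the transposed result is explicitly copied; otherwise downstream reads just see a strided view.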
Mathew
Mathew Yeates wrote:
> Hmm
> I'm trying to duplicate the behavior with a simple program
> ---------
> import numpy
> datasize=5529000
> numrows=121
>
> # mode="w+" creates the file at the required size, so no
> # separate open()/close() step is needed first.
> big = numpy.memmap("biggie", mode="w+",
>                    shape=(numrows, datasize), dtype=numpy.float32)
>
> c = numpy.ones(shape=(datasize,), dtype=numpy.float32)
> for r in range(numrows):
>     print r
>     big[r, :] = c
>     c[r] = 2.0
> ---------------------
> but it is fast. Hmmm. Any ideas about where to go from here?
> Mathew
>
>
>
> Robert Kern wrote:
>
>> Mathew Yeates wrote:
>>
>>
>>> Hi
>>>
>>> I have a line in my program that looks like
>>> outarr[1,:] = computed_array
>>> where outarr is a memory mapped file. This takes forever.
>>>
>>> I checked and copying the data using "cp" at the command line takes 1
>>> or 2 seconds. So the problem can't be attributed simply to disk i/o. Is
>>> it because the elements are being written one at a time? Any ideas on
>>> how to speed this up?
>>>
>>>
>> Memory-mapping is highly platform dependent. What platform are you on? What are
>> the sizes of the arrays? Can you write up a small, self-contained script that
>> demonstrates the issue so we can experiment and try things out on different
>> machines?
>>
>>
>>
>
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
>
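A likely explanation for the original slowdown, sketched below with illustrative sizes (much smaller than the 121 x 5529000 case above): on a C-ordered memmap, assigning a row touches one contiguous run of bytes, while assigning a column touches widely spaced locations across the whole file, forcing the OS to page in far more of it.

```python
import numpy as np
import os
import tempfile

# Illustrative sizes, not the ones from the thread.
rows, cols = 100, 10000
path = os.path.join(tempfile.mkdtemp(), "demo.dat")

# mode="w+" creates the file at the required size.
m = np.memmap(path, mode="w+", shape=(rows, cols), dtype=np.float32)

# Row assignment writes one contiguous run of cols*4 bytes...
m[0, :] = 1.0

# ...while column assignment writes `rows` values spaced
# cols*4 bytes apart, scattering I/O across the file.
m[:, 0] = 2.0

m.flush()
```

This is why filling rows and transposing at the end (with an explicit copy if a contiguous result is needed) can beat filling columns directly on a memory-mapped array.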