[Numpy-discussion] Nasty bug using pre-initialized arrays
Mon Jan 7 13:08:19 CST 2008
On 07/01/2008, Charles R Harris <firstname.lastname@example.org> wrote:
> One place where Numpy differs from MatLab is the way memory is handled.
> MatLab is always generating new arrays, so for efficiency it is worth
> preallocating arrays and then filling in the parts. This is not the case in
> Numpy where lists can be used for things that grow and subarrays are views.
> Consequently, preallocating arrays in Numpy should be rare and used when
> either the values have to be generated explicitly, which is what you see
> when using the indexes in your first example. As to assignment between
> arrays, it is a mixed question. The problem again is memory usage. For large
> arrays, it makes sense to do automatic conversions, as is also the case in
> functions taking output arrays, because the typecast can be pushed down into
> C where it is time and space efficient, whereas explicitly converting the
> array uses up temporary space. However, I can imagine an explicit typecast
> function, something like
> a[...] = typecast(b)
> that would replace the current behavior. I think the typecast function could
> be implemented by returning a view of b with a castable flag set to true,
> that should supply enough information for the assignment operator to do its
> job. This might be a good addition for Numpy 1.1.
This is introducing a fairly complex mechanism to cover, as far as I
can see, two cases: conversion of complex to real, and conversion of
float to integer. Conversion of complex to real can already be done
explicitly without a temporary:
a[...] = b.real
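For example, a minimal sketch of that pattern (array names and values are illustrative):

```python
import numpy as np

b = np.array([1 + 2j, 3 + 4j, 5 + 6j])  # complex source
a = np.empty(3)                          # preallocated real destination

# b.real is a view onto the real components of b, so this
# assignment copies element-wise without allocating a temporary.
a[...] = b.real
```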
That leaves only conversion of float to integer.
Does this single case cause enough confusion to warrant an exception
and a way around it?
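For reference, a small sketch of what that float-to-integer assignment does today (values chosen for illustration):

```python
import numpy as np

a = np.zeros(3, dtype=int)       # preallocated integer array
b = np.array([0.5, 1.7, 2.9])    # float source

# The assignment silently truncates the fractional parts toward
# zero -- the behavior under discussion in this thread.
a[...] = b
```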