[Numpy-discussion] Nasty bug using pre-initialized arrays

Timothy Hochberg tim.hochberg@ieee....
Fri Jan 4 17:31:53 CST 2008


On Jan 4, 2008 3:28 PM, Scott Ransom <sransom@nrao.edu> wrote:

> On Friday 04 January 2008 05:17:56 pm Stuart Brorson wrote:
> > >> I realize NumPy != Matlab, but I'd wager that most users would
> > >> think that this is the natural behavior......
> > >
> > > Well, that behavior won't happen. We won't mutate the dtype of the
> > > array because of assignment. Matlab has copy(-on-write) semantics
> > > for things like slices while we have view semantics. We can't
> > > safely do the reallocation of memory [1].
> >
> > That's fair enough.  But then I think NumPy should consistently
> > typecheck all assignments and throw an exception if the user attempts
> > an assignment which loses information.
> >
> > If you point me to a file where assignments are done (particularly
> > from array elements to array elements) I can see if I can figure out
> > how to fix it & then submit a patch.  But I won't promise anything!
> > My brain hurts already after analyzing this "feature".....   :-)
>
> There is a long history in numeric/numarray/numpy about this "feature".
> And for many of us, it really is a feature -- it prevents the automatic
> upcasting of arrays, which is very important if your arrays are huge
> (i.e. comparable in size to your system memory).
>
> For instance in astronomy, where very large 16-bit integer or 32-bit
> float images or data-cubes are common, if you upcast your 32-bit floats
> accidentally because you are doing double precision math (i.e. the
> default in Python) near them, that can cause the program to swap out or
> die horribly.  In fact, this exact example is one of the reasons why
> the Space Telescope people initially developed numarray.  numpy has
> kept that model.  I agree, though, that when using very mixed types
> (i.e. complex and ints, for example), the results can be confusing.
>
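
To make the trade-off concrete, here is a small sketch (the array names are
made up, and the exact promotion rules may vary a bit between numpy
versions):

    >>> import numpy as np
    >>> img = np.zeros((4, 4), dtype=np.float32)  # pre-allocated 32-bit image
    >>> img[:] = img.astype(np.float64) + 0.1     # double-precision math nearby
    >>> img.dtype                                 # assignment casts back: no upcast
    dtype('float32')
    >>> idx = np.zeros(3, dtype=np.int16)
    >>> idx[:] = [1.9, 2.5, 3.7]                  # the flip side: silent truncation
    >>> idx
    array([1, 2, 3], dtype=int16)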


This isn't a very compelling argument in this case. The concern the numarray
people were addressing was the upcasting of precision. However, there are
two related hierarchies in numpy: one is the kind[1] of data, roughly bool,
int, float, complex, and each kind comes in various precisions. The numarray
folks were concerned with avoiding upcasting of precision, not with avoiding
upcasting of kinds. And I can't see much (any?) justification for allowing
automagic downcasting of kind, complex->float being the most egregious,
other than backwards compatibility. This is clearly an opportunity for
confusion and likely a magnet for bugs, and I've yet to see any useful
examples to support this behaviour. I imagine that there are some benefits,
but I doubt that they are compelling enough to justify the current
behaviour.
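
A concrete example of the complex->float case (just a sketch; whether this
is silent or produces a warning may depend on the numpy version):

    >>> import numpy as np
    >>> a = np.zeros(3)                      # float64 array
    >>> a[:] = np.array([1+2j, 3+4j, 5+6j])
    >>> a                                    # imaginary parts are discarded
    array([ 1.,  3.,  5.])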




[1] I can't recall if this is the official terminology; I'm away from my
home computer at the moment and it's hard for me to check. The idea should
be clear, however.
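
For what it's worth, dtype objects do expose this under a .kind attribute,
so at least the attribute is spelled that way:

    >>> import numpy as np
    >>> np.dtype(np.float32).kind, np.dtype(np.complex128).kind
    ('f', 'c')
    >>> np.dtype(np.int16).kind, np.dtype(bool).kind
    ('i', 'b')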



-- 
.  __
.   |-\
.
.  tim.hochberg@ieee.org

