[Numpy-discussion] type conversion question

Benjamin Root ben.root@ou....
Thu Apr 18 20:02:59 CDT 2013


On Thu, Apr 18, 2013 at 7:31 PM, K.-Michael Aye <kmichael.aye@gmail.com>wrote:

> I don't understand why sometimes a direct assignment of a new dtype is
> possible (but messes up the values), and why at other times a seemingly
> harmless upcast (in my potentially ignorant point of view) is not
> possible.
> So, maybe a direct assignment of a new dtype is actually never a good
> idea? (I'm asking), and one should always go the route of newarray=
> array(oldarray, dtype=newdtype), but why then sometimes the upcast
> provides an error and forbids it and sometimes not?
>
>
> Examples:
>
> In [140]: slope.read_center_window()
>
> In [141]: slope.data.dtype
> Out[141]: dtype('float32')
>
> In [142]: slope.data[1,1]
> Out[142]: 10.044398
>
> In [143]: val = slope.data[1,1]
>
> In [144]: slope.data.dtype='float64'
>
> In [145]: slope.data[1,1]
> Out[145]: 586.98938070189865
>
> #-----
> #Here, the value of data[1,1] has completely changed (and so has the
> rest of the array), and no error was given.
> # But then...
> #----
>
> In [146]: val.dtype
> Out[146]: dtype('float32')
>
> In [147]: val
> Out[147]: 10.044398
>
> In [148]: val.dtype='float64'
> ---------------------------------------------------------------------------
> AttributeError                            Traceback (most recent call last)
> <ipython-input-148-52a373a41cac> in <module>()
> ----> 1 val.dtype='float64'
>
> AttributeError: attribute 'dtype' of 'numpy.generic' objects is not
> writable
>
> === end of code
>
> So why is there an error in the 2nd case, but no error in the first
> case? Is there a logic to it?
>
>
When you change a dtype like that in the first case, you aren't really
upcasting anything.  You are changing how numpy interprets the underlying
bits.  Because you went from a 32-bit element size to a 64-bit element
size, each value you now see is the double-precision interpretation of the
bits of two of your original data points fused together (and the array now
has half as many elements along its last axis).
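A minimal sketch of that reinterpretation (illustrative values, not the original slope data):

```python
import numpy as np

a = np.arange(4, dtype='float32')   # 4 values, 16 bytes of memory
a.dtype = 'float64'                 # reinterpret those same 16 bytes in place
print(a.shape)                      # (2,) -- half as many elements now
print(a)                            # "garbage": each value fuses two old ones
```

`a.view('float64')` does the same reinterpretation without mutating `a`, which makes the intent more obvious.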

The correct way to cast is to do something like "a =
slope.data.astype('float64')".  That makes a copy and does the casting as
safely as possible.
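Applied to a small stand-in array (the value from the thread, used for illustration), the safe cast preserves each value:

```python
import numpy as np

a = np.array([10.044398], dtype='float32')
b = a.astype('float64')   # copies the data, converting each value properly
print(b.dtype)            # float64
print(float(b[0]))        # still ~10.0444, up to float32 precision
```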

As for the second one, you have what is called a numpy scalar.  These
aren't quite the same thing as a numpy array, and are more restrictive:
their attributes, including dtype, are read-only.  Can you imagine what
sort of issues it would pose if reassigning a scalar's dtype let you view
and modify the neighboring chunks of memory, without ever having to mess
around with pointers?  It would be a hacker's dream!
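A short sketch of the scalar case: direct dtype reassignment raises the error from the thread, while the copy-and-convert route works for scalars just as it does for arrays:

```python
import numpy as np

val = np.float32(10.044398)   # a numpy scalar, not an array

# Direct dtype reassignment is forbidden on scalars:
try:
    val.dtype = 'float64'
except AttributeError as e:
    print(e)                  # dtype of numpy scalars is not writable

# The safe route returns a new, properly converted scalar:
val64 = val.astype('float64')
print(val64.dtype)            # float64
```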

I hope that clears things up.
Ben Root