Sun Jan 13 17:14:29 CST 2008
numpy frequently refers to 'casting', but I'm not sure the term is ever
defined. I believe it has the same meaning as in C, where it is
unfortunately used to mean two different things: casts that do not
change the underlying bits (such as a pointer cast), and casts that
actually convert to different bits (such as float -> double).
I think numpy means the latter: when an array's underlying data is of one
type, a cast to another type means actually reallocating and converting it.
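For what it's worth, the two senses can both be demonstrated from Python
(this is just an illustration, not part of my question): astype converts
the bits into a fresh buffer, while view reinterprets the same buffer.

```python
import numpy as np

a = np.arange(4, dtype=np.int32)

# Value-converting "cast": astype allocates a new buffer and converts
# each element, so the underlying bits change.
b = a.astype(np.float64)

# Reinterpreting "cast": view keeps the same buffer and merely relabels
# the bytes, so the underlying bits are untouched.
c = a.view(np.uint32)
```

Here b shares no memory with a, while c aliases a's buffer exactly.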
It often occurs that I have an algorithm that can take any integral type,
because it is written with C++ templates. In that case, I don't want to
use PyArray_FROMANY, because I don't want to unnecessarily convert the array
data. Instead, I'd like to inquire what is the preferred type of the data.
The solution I'm exploring is to use a function I
call 'preferred_array_type'. This uses the __array_struct__ interface to
find the native data type. I chose to use this interface, because then it
will work with both numpy arrays and other array-like types.
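To make the idea concrete, here is a rough Python-level sketch of what I
mean (my actual implementation uses the C-level __array_struct__; this
sketch uses its dict analogue, __array_interface__, and the function name
'preferred_array_type' is just my own choice):

```python
import numpy as np

def preferred_array_type(obj):
    # Inquire what the native data type of an array-like object is,
    # without touching or converting any of its data. Works for numpy
    # arrays and for any other type exposing the array interface.
    interface = getattr(obj, '__array_interface__', None)
    if interface is None:
        raise TypeError("object does not expose the array interface")
    # 'typestr' is the type string, e.g. '<i2' for little-endian int16.
    return np.dtype(interface['typestr'])
```

An algorithm templated over integral types could then dispatch on the
returned dtype and only fall back to conversion when it truly has to.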
Any thoughts on all of this? In particular, on my observation that numpy
seems to want me to tell it what data type of array my algorithm wants,
but doesn't seem to provide a good mechanism for inquiring what the
native type is, so that conversion can be avoided.