Mon Jan 14 10:58:50 CST 2008
> I'm sorry, I still think we're talking past each other. What do you mean by
> "native data type"? If you just want to get an ndarray without specifying a
> type, use PyArray_FROM_O(). That's what it's for. You don't need to know the
> data type beforehand.
What I have wanted in the past (and what I thought Neal was after) is a
way to choose which function to call according to the typecode of the
data as it currently sits in memory. I don't want to convert (or cast,
or even touch) the data, just call a type-specific function instead. C++
templates can take some of the tedium out of that, but in some cases the
algorithms themselves may differ too. Choosing which sort algorithm to
use springs to mind.
Rather than saying "give me the right kind of array", I think there is
an interest in saying "choose which function is the best for this data".
PyArrayObject *array = (PyArrayObject *) PyArray_FROM_O((PyObject *) O);
switch (array->descr->type_num) {
    case NPY_BYTE:  signed_func(array);   break;
    case NPY_UBYTE: unsigned_func(array); break;
    /* ... one case per typecode of interest ... */
}
It sort of implies having a C++ type hierarchy for numpy arrays and
casting array to be a PyFloatArray or PyDoubleArray etc?
The extra confusion might be due to the ways arrays can be laid out in
memory: indexing into array slices is not always obvious. Also, if you
want to make sure your inner loop runs over the "fast" index, you may
want an algorithm that reads the strides at run time.
Sorry if I've only added to the confusion.