[Numpy-discussion] numpy.ndarrays as C++ arrays (wrapped with boost)
Wed Sep 12 13:10:39 CDT 2007
> less than what? std::valarray, etc. all help with this.
I do not agree with this statement. A correctly memory-managed array would
increment and decrement a reference counter somewhere.
> Yes, it sure would be nice to build it on an existing code base, and
> boost::multiarray seems to fit.
The problem with multiarray is that the number of dimensions of the array is
fixed at compile time. Although one could use 1 for the size of the remaining
dimensions, I don't think that's the best choice, given that a real
dynamic-dimension array is not much more complicated than a static-dimension
one, perhaps just a little slower.
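To make the contrast concrete: in boost::multi_array<T, N> the N is a template
parameter, so the dimensionality is baked in at compile time. A minimal sketch
of the run-time alternative described above might look like this (all names
are invented for the example, not from any library):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// An array whose number of dimensions is chosen at run time, unlike
// boost::multi_array<T, N>, where N is fixed at compile time.
class DynArray {
public:
    explicit DynArray(std::vector<std::size_t> shape)
        : shape_(std::move(shape)), strides_(shape_.size()) {
        // Row-major strides, computed at run time from the shape.
        std::size_t stride = 1;
        for (std::size_t i = shape_.size(); i-- > 0; ) {
            strides_[i] = stride;
            stride *= shape_[i];
        }
        data_.resize(stride);  // stride now holds the total element count
    }
    std::size_t ndim() const { return shape_.size(); }
    double& at(const std::vector<std::size_t>& index) {
        std::size_t offset = 0;
        for (std::size_t i = 0; i < index.size(); ++i)
            offset += index[i] * strides_[i];
        return data_[offset];
    }
private:
    std::vector<std::size_t> shape_, strides_;
    std::vector<double> data_;
};
```

The price is the indirection through run-time strides on each access, which is
the "perhaps a little slower" mentioned above.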
> boost::multiarray does not seem to take this approach. Rather it has two
> classes: a multi_array, responsible for its own data block, and a
> multi_array_ref, which uses a view on another multiarray's data block.
> This is getting close, but it means that when you create a
> multi_array_ref, the original multi_array needs to stay around. I'd
> rather have a much more flexible system, where you could create an array,
> create a view of that array, then destroy the original, and have the
> data block go away only when you destroy the view. This could cause
> complications: if you started with a huge array and made a view into a
> tiny piece of it, the whole data block would stick around -- but that
> would be up to the user to think about.
I don't know how numpy does it either, but a view of a view of an array
should itself be a view of the array. So in C++, a view should reference only
the data, not the parent view; that way, when the original array is
destroyed, the view is still valid, as it holds a reference to the data and
not to the original array.
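A sketch of that layout, with std::shared_ptr standing in for whatever
reference counting one would actually use (the class names are made up for
the example): both the array and every view hold a counted reference to the
data block itself, never to the parent, so any of them can outlive the
others.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// The shared data block; it dies when the last array or view lets go of it.
struct DataBlock {
    std::vector<double> values;
};

class View {
public:
    View(std::shared_ptr<DataBlock> data, std::size_t offset)
        : data_(std::move(data)), offset_(offset) {}
    double& operator[](std::size_t i) { return data_->values[offset_ + i]; }
    // A view of a view still references the data block, not the parent view.
    View slice(std::size_t offset) const { return View(data_, offset_ + offset); }
private:
    std::shared_ptr<DataBlock> data_;
    std::size_t offset_;
};
```

Destroying the original array then leaves any surviving view fully usable,
and the whole block is freed only when the last view goes away -- including
the "huge block kept alive by a tiny view" behavior discussed above.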
> hm. that could work (as far as my limited C++ knowledge tells me), but
> it's still static at run time -- which may be OK -- and is C++-ish anyway.
I've done this before: with type traits and a multi-dispatch method, you can
instantiate several functions with the correct types. It's a classic approach
that is used in plugins; it does not use RTTI, and it is portable across C++
compilers.
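The core of that approach can be sketched in a few lines (the enum and
function names are invented for illustration): a run-time type tag -- the
role a numpy dtype would play -- is mapped by one switch onto compile-time
template instantiations, with no RTTI anywhere.

```cpp
#include <cassert>
#include <cstddef>

// Run-time tag standing in for a numpy dtype.
enum class TypeTag { Float64, Int32 };

// One template, instantiated once per supported element type.
template <typename T>
double sum_as_double(const void* data, std::size_t n) {
    const T* p = static_cast<const T*>(data);
    double total = 0.0;
    for (std::size_t i = 0; i < n; ++i) total += static_cast<double>(p[i]);
    return total;
}

// The dispatch: a switch maps the run-time tag onto the instantiations.
double dispatch_sum(TypeTag tag, const void* data, std::size_t n) {
    switch (tag) {
        case TypeTag::Float64: return sum_as_double<double>(data, n);
        case TypeTag::Int32:   return sum_as_double<int>(data, n);
    }
    return 0.0;  // unreachable for valid tags
}
```

Each case is resolved at compile time, so the per-element code is as fast as
a hand-written loop; only the single switch happens at run time.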
> Can the python gurus here comment on how possible that is?
Once you have the Python object, you increment the reference counter when you
wrap the data in C++ (for a real array or for a view) and decrement it in the
destructor of your C++ object -- is that what you mean?
If the C++ object can directly use a PyObject, it's very simple to do. It
could perhaps be done with a policy class, so that temporary C++ objects
would use a default policy that does not rely on a Python object.
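A possible shape for those policies (every name here is invented for the
sketch): the array class is parameterized on a storage policy, the default
policy owns its own buffer with no Python involvement, and the alternative
holds someone else's buffer and only adjusts its reference count for the
wrapper's lifetime. In the real wrapper that second policy would call
Py_INCREF in its constructor and Py_DECREF in its destructor; a plain
counter stands in below so the example does not depend on Python.h.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Default policy: the C++ object owns its own storage, no Python involved.
struct OwnedStorage {
    std::vector<double> buffer;
    explicit OwnedStorage(std::size_t n) : buffer(n) {}
    double* data() { return buffer.data(); }
};

// Alternative policy: the storage belongs to someone else (e.g. a numpy
// array); we only hold a reference for the lifetime of the wrapper.
struct RefCountedStorage {
    double* ptr;
    int* refcount;  // stand-in for Py_INCREF/Py_DECREF on the PyObject
    RefCountedStorage(double* p, int* rc) : ptr(p), refcount(rc) { ++*refcount; }
    RefCountedStorage(const RefCountedStorage& o)
        : ptr(o.ptr), refcount(o.refcount) { ++*refcount; }
    ~RefCountedStorage() { --*refcount; }
    double* data() { return ptr; }
};

// The array itself is policy-agnostic.
template <typename StoragePolicy>
class Array {
public:
    explicit Array(StoragePolicy storage) : storage_(storage) {}
    double* data() { return storage_.data(); }
private:
    StoragePolicy storage_;
};
```

Temporaries inside C++-only code would use Array<OwnedStorage> and never
touch python.h; only the wrapping layer instantiates the counted policy.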
> I'm not so much worried about the overhead as the dependency -- to use
> your words, it would feel perverse to be including python.h for a
> program that wasn't using python at all.
This is solved if one can use policy classes.
> This functionality seems to be missing from many (most) of these C++
> containers. I suspect that it's the memory management issue. One of the
> points of these containers is to take care of memory management for you
> -- if you pass in a pointer to an existing data block, it's not
> managing your memory any more.
What Albert did for his wrapper is this: provide an adaptor that can use the
data pointer. It's only a policy (but not the default one).
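Such an adaptor is the simplest policy of all (the class name below is
invented, not Albert's actual code): it wraps a pointer to a block somebody
else allocated and does no memory management whatsoever -- the caller stays
responsible for the data's lifetime.

```cpp
#include <cassert>
#include <cstddef>

// Non-owning adaptor over an externally managed data block.
class ExternalAdaptor {
public:
    ExternalAdaptor(double* data, std::size_t size) : data_(data), size_(size) {}
    double& operator[](std::size_t i) { return data_[i]; }
    std::size_t size() const { return size_; }
    // No destructor logic: the adaptor never frees the block it wraps.
private:
    double* data_;
    std::size_t size_;
};
```

This is exactly the case the quoted text worries about: once you pass in an
existing pointer, the container is no longer managing your memory, so making
it a non-default policy keeps the safe behavior as the default.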
Full Disclosure: I have neither the skills nor the time to actually
> implement any of these ideas. If no one else does, then I guess we're
> just blabbing -- not that there is anything wrong with blabbing!
I know that in my lab, we intend to wrap numpy arrays in a C++ multi-array,
but not the boost one. It will be for arrays that have more than 3
dimensions; for fewer than 2 dimensions, we will use our own matrix
library, as it is "simple" to wrap arrays with it. The most complicated part
will be the automatic conversion. It will most likely be Open Source (GPL),
but I don't know when we will have the time to do it, and then when we will
make it available...