[Numpy-discussion] numpy.ndarrays as C++ arrays (wrapped with boost)
Tue Sep 11 11:41:17 CDT 2007
I've sent pretty much the same email to c++sig, but I thought I'd also try my
luck here, especially since I just saw a closely related question posted one
week ago here (albeit mostly from a swig context).
I'm working on an existing scientific code base that's mostly C++, and I'm
currently interfacing it to python with boost.python, with a view to doing
non-performance-critical things in python. The code currently mostly just uses
plain C double arrays passed around by pointers, and I'd like to encapsulate
this at least with something like std::vector (or maybe valarray), but I've
been wondering whether it might not make sense to use (slightly wrapped) numpy
ndarrays -- since I eventually plan to make fairly heavy use of existing
python infrastructure like matplotlib and scipy where possible. Also, ndarrays
provide fairly rich functionality even at the C-API-level and I'm pretty
familiar with numpy. Furthermore, I can't find anything equivalent to numpy
for C++ -- there's ublas as well as several other matrix libs and a couple of
array ones (like blitz++), but there doesn't seem to be one obvious choice the
way there is for python. I think I will mostly use double arrays of fairly large
size, so having really low overhead operations on small arrays with more or
less exotic types is not important to me.
Things that would eventually come in handy, although they're not needed yet,
are basic linear algebra and maybe two or three LAPACK-level functions (I can
think of cholesky decomposition and SVD right now) as well as possibly
wavelets (DWT). I think I could get all these things (and more) from scipy
(and kin) without too much fuss (although I haven't tried wavelet support yet),
and it seems like picking together the same functionality from different C++
libs would require considerably more work.
So my question is: might it make sense to use (a slightly wrapped)
numpy.ndarray, and if so, is some code already floating around for that? (On
first glance it seems like there's a bit of support for the obsolete Numeric
package in boost, but none for the newer numpy that supersedes it.) If not, is
my impression correct that making the existing code numpy-compatible shouldn't
be too much work?
Provided this route doesn't make much sense, I'd also be curious what people
would recommend doing instead.
In last week's thread mentioned above I found the following link which looks
pretty relevant, albeit essentially undocumented and possibly pre-alpha -- has
anyone here tried it out?