[Numpy-discussion] fast iteration (I think I've got it)

Neal Becker ndbecker2@gmail....
Tue Jan 1 15:22:15 CST 2008


Thank you for the response.  I'm afraid I haven't explained clearly what I'm doing.

I have a lot of C++ code written in a generic interface style.  (The
code is for signal processing, but that is irrelevant here.)

The code is generic for data types as well as container types.

To accomplish this, the interface uses the boost::range interface concept. 
An example of using this interface:

template<typename in_t>
void F (in_t const& in) {
  typename boost::range_const_iterator<in_t>::type i = boost::begin (in);
  for (; i != boost::end (in); ++i)
    do_something_with (*i);
}

In the above, in_t is a container type.  It could be std::vector<int>, for
example.

The concept provides:
  range_iterator<in_t>::type and range_const_iterator<in_t>::type: the
mutable and const iterator types for the range
  begin(in)/end(in): iterators to the beginning and one past the end
  ++i: increments the iterator
  *i: dereferences the iterator

This allows writing functions that work with different container types, for
example std::vector or boost::ublas::vector.

I'm trying to make this work with numpy.

To do this, I'm using boost::iterator_facade to create appropriate
iterators.  In the simplest case, this is just a wrapper around
PyArrayIterObject*.

Using this directly results in code that:
* calls PyArray_IterNew to create the iterator,
* calls PyArray_ITER_NEXT to increment it, and
* reads it->dataptr to dereference it.

This will be slower than necessary, since every increment goes through the
iterator's general n-dimensional bookkeeping.

So, I hope this explains the motivation behind the plan.  I'm hoping that I
can use PyArray_IterAllButAxis to iterate over arbitrary numpy arrays, but
with the inner loop advancing a raw pointer via dptr += stride/sizeof(T) to
speed up the access.  I believe every numpy array has a constant stride
along any single dimension, which is what would allow this to work.
