[Numpy-discussion] Optimized half-sizing of images?
Thu Aug 6 20:46:03 CDT 2009
> We have a need to generate half-size versions of RGB images as
> fast as possible.
How good do these need to look? You could just throw away every other
pixel... image[::2, ::2].
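A minimal sketch of the decimation approach (the image array shown here is a made-up placeholder; any H x W x 3 array with even dimensions works the same way):

```python
import numpy as np

# Hypothetical 480x640 RGB image; real data would come from your loader.
image = np.zeros((480, 640, 3), dtype=np.uint8)

# Keep every other pixel along both spatial axes; channels are untouched.
# This is a view, so it costs essentially nothing until you copy it.
half = image[::2, ::2]

print(half.shape)  # (240, 320, 3)
```

Note that the result is a non-contiguous view of the original; call `np.ascontiguousarray(half)` if downstream code needs contiguous memory.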
Failing that, you could also try using ndimage's convolve routines to
run a 2x2 box filter over the image, and then throw away half of the
pixels. But this would be slower than optimal, because the kernel
would be convolved over every pixel, not just the ones you intend to
keep.
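The 2x2 box filter plus decimation can also be done in pure NumPy by averaging the four pixels of each 2x2 block directly, which only computes the pixels that are kept. A sketch, assuming even height and width and a made-up helper name:

```python
import numpy as np

def halve_box(image):
    """Half-size an RGB image by averaging each 2x2 block of pixels.

    Equivalent to a 2x2 box filter followed by 2x decimation, but it
    never touches the pixels that would be thrown away. Assumes even
    height and width.
    """
    # Widen before summing so four uint8 values can't overflow.
    img = image.astype(np.uint16)
    avg = (img[0::2, 0::2] + img[1::2, 0::2]
           + img[0::2, 1::2] + img[1::2, 1::2]) // 4
    return avg.astype(image.dtype)
```

This does one pass over the data and no per-pixel Python work, so it should be much closer to optimal than convolving the full image, though still not as fast as a tight C loop.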
Really though, I'd just bite the bullet and write a C extension (or
Cython, whatever; an extension that works on a defined-dimensionality,
defined-dtype array is pretty simple to write), or as suggested before, do it
on the GPU. (Though I find that readback from the GPU can be slow
enough that C code can beat it in some cases.)