[Numpy-discussion] Optimized half-sizing of images?
Fri Aug 7 11:28:53 CDT 2009
Zachary Pincus wrote:
>> We have a need to generate half-size versions of RGB images as
>> quickly as possible.
> How good do these need to look? You could just throw away every other
> pixel... image[::2, ::2].
I'd like as good quality as I can get; throwing away pixels gets a bit ugly.
> Failing that, you could also try using ndimage's convolve routines to
> run a 2x2 box filter over the image, and then throw away half of the
> pixels. But this would be slower than optimal, because the kernel
> would be convolved over every pixel, not just the ones you intend to keep.
yup -- worth a try though.
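For a 2x2 box filter, the convolution can be skipped entirely: averaging the four pixels of each 2x2 block with slicing gives the same result as convolving and then decimating, with no wasted work. A minimal sketch (the helper name is hypothetical, not from the thread):

```python
import numpy as np

def halfsize_box(image):
    """Average each 2x2 block of `image` -- equivalent to a 2x2 box
    filter followed by 2x decimation, but without computing the
    convolution at pixels that would be thrown away.

    Works for grayscale (H, W) and RGB (H, W, 3) arrays alike,
    because the slicing only touches the first two axes.
    """
    # Trim odd trailing row/column so 2x2 blocks tile exactly.
    h = image.shape[0] // 2 * 2
    w = image.shape[1] // 2 * 2
    img = image[:h, :w].astype(np.float32)
    return (img[::2, ::2] + img[1::2, ::2]
            + img[::2, 1::2] + img[1::2, 1::2]) / 4.0
```

This is essentially the pure-NumPy "slicing" approach benchmarked later in the thread.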
> Really though, I'd just bite the bullet and write a C extension (or
> cython, whatever, an extension to work for a defined-dimensionality,
> defined-dtype array is pretty simple),
I was going to sit down and do that this morning, but...
> or as suggested before, do it
> on the GPU.
I have no idea how to do that, except maybe pyOpenGL, which is on our
list to try.
Sebastian Haase wrote:
> regarding your concerns about doing too-fancy interpolation at the cost of
> speed, I would guess the overall bottle neck is rather the memory
> access than the extra CPU cycles needed for interpolation.
well, could be, though I can't really know till I try. One example,
though: with ndimage.zoom, order-1 interpolation is MUCH faster
than order 2 or 3.
> Regarding ndimage.zoom it should be able to "not zoom" the color-axis
> but the others in one call.
well, that's what I thought, but I can't figure out how to do it. The
docs are a bit sparse. Here's my offer:
If someone tells me how to do it, I'll make a docs contribution to the
SciPy docs explaining it.
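For what it's worth, zoom accepts a per-axis sequence of zoom factors, which appears to be what Sebastian means: pass 1.0 for the color axis so only the spatial axes are resized. A minimal sketch, assuming an (H, W, 3) uint8 image:

```python
import numpy as np
from scipy import ndimage

rgb = np.zeros((64, 64, 3), dtype=np.uint8)

# One zoom factor per axis: halve rows and columns, leave the
# color axis untouched. order=1 is bilinear, much faster than
# the cubic (order=3) default.
half = ndimage.zoom(rgb, (0.5, 0.5, 1.0), order=1)
# half.shape is (32, 32, 3): spatial axes halved, color axis intact
```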
> You say that as if it's painful to do so :)
wow! Thanks for doing my work for me. I thought this would be a good
case to give Cython a try for the first time -- having a working example
to start from helps a lot.
> sage: timeit("halfsize_cython(a)")
> 625 loops, best of 3: 604 µs per loop
> sage: timeit("halfsize_slicing(a)")
> 5 loops, best of 3: 2.72 ms per loop
and bingo! a 4.5-times speed-up -- I think that's enough to notice in our app.
> I was about to say the same thing, it's probably the memory, not
> cycles, that's hurting you.
sure, but the slicing method pushes that memory around more than it
needs to.
> Of course 512x512 is still small enough
> to fit in L2 of any modern computer.
I think so -- I do know that the slicing method slows down a lot with
larger images. We're tiling anyway in this case, but if I did want to do
a big image, I'd probably break it down into chunks to process it anyway.
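Processing in horizontal strips keeps each strip's working set small enough to stay in cache. A sketch of that idea (the function and the `chunk_rows` knob are hypothetical, built on the same 2x2 averaging discussed above):

```python
import numpy as np

def halfsize_chunked(image, chunk_rows=256):
    """Half-size a large image strip by strip so each strip's working
    set stays cache-friendly. chunk_rows must be even so 2x2 blocks
    never straddle a strip boundary."""
    assert chunk_rows % 2 == 0
    # Trim odd trailing row/column so 2x2 blocks tile exactly.
    h = image.shape[0] // 2 * 2
    w = image.shape[1] // 2 * 2
    out = np.empty((h // 2, w // 2) + image.shape[2:], dtype=np.float32)
    for r in range(0, h, chunk_rows):
        strip = image[r:r + chunk_rows, :w].astype(np.float32)
        sh = strip.shape[0] // 2 * 2      # last strip may be shorter
        strip = strip[:sh]
        out[r // 2:r // 2 + sh // 2] = (
            strip[::2, ::2] + strip[1::2, ::2]
            + strip[::2, 1::2] + strip[1::2, 1::2]) / 4.0
    return out
```

Whether the chunking actually wins depends on image size and cache; for images that already fit in L2 it should make little difference.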
Christopher Barker, Ph.D.
Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception