[Numpy-discussion] Optimized half-sizing of images?
Thu Aug 6 15:16:25 CDT 2009
(second try on this message; the first time I included a test PNG that
made it too large)
We have a need to generate half-size versions of RGB images as quickly
as possible. PIL does a pretty good job, but it dawned on me that in the
special case of half-sizing, one might be able to do it faster with
numpy, simply averaging each 2x2 block of pixels in the larger image to
produce one pixel in the smaller one. I'm doing tiling, and thus reducing
512x512 images to 256x256, so I imagine I'm making good use of the cache
(it does get pretty pokey with really large images!)
What I have now is essentially this:
# a is a (h, w, 3) uint8 RGB array
a2 = a[0::2, 0::2, :].astype(np.uint16)  # accumulate in uint16 so 4*255 can't overflow
a2 += a[0::2, 1::2, :]
a2 += a[1::2, 0::2, :]
a2 += a[1::2, 1::2, :]
a2 //= 4  # floor division; cast back with a2.astype(np.uint8) if a uint8 result is needed
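Filled out as a self-contained, runnable sketch (the function name and the
random test image are mine, not from the original post):

```python
import numpy as np

def half_size(a):
    """Average each 2x2 block of an (h, w, 3) uint8 image.

    Accumulates in uint16 so the sum of four uint8 values
    (at most 4 * 255 = 1020) cannot overflow.
    """
    a2 = a[0::2, 0::2, :].astype(np.uint16)
    a2 += a[0::2, 1::2, :]
    a2 += a[1::2, 0::2, :]
    a2 += a[1::2, 1::2, :]
    a2 //= 4                     # floor division; result now fits in uint8
    return a2.astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(512, 512, 3), dtype=np.uint8)
small = half_size(img)
print(small.shape)   # (256, 256, 3)
```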
time: 67.2 ms per loop
I can speed it up a bit if I accumulate in uint8 and divide as I go:
a2 = a[0::2, 0::2, :].astype(np.uint8) // 4
a2 += a[0::2, 1::2, :] // 4
a2 += a[1::2, 0::2, :] // 4
a2 += a[1::2, 1::2, :] // 4
time: 46.6 ms per loop
That does lose a touch of accuracy, I suppose, but nothing I can see.
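How much accuracy? Since each of the four terms is floored separately, the
divide-as-you-go result can be up to 3 levels below the true average per
channel. A small sketch of the worst case (my example, not from the post):

```python
import numpy as np

# Worst case for divide-as-you-go: four pixels all of value 3.
# The true block average is 3, but 3 // 4 == 0 for every term.
block = np.full((2, 2, 3), 3, dtype=np.uint8)

exact = block.astype(np.uint16).sum(axis=(0, 1)) // 4              # true average
approx = sum(block[i, j] // 4 for i in (0, 1) for j in (0, 1))     # divide-as-you-go
print(exact, approx)   # [3 3 3] [0 0 0]
```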
Method 1 is about twice as slow as PIL's bilinear scaling.
Can I do better? It seems it should be faster if I can avoid so many
separate loops through the array. I figure there may be some way with
filter or convolve or ndimage, but they all seem to return an array the
same size as the input.
(Cython is another option, of course.)
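One way to express the whole averaging as a single reduction is a reshape
trick: view the image as (h/2, 2, w/2, 2, 3) blocks and sum over the two
block axes. This is my sketch, not from the original post, and whether it
actually beats the four-slice version depends on the NumPy version and
cache behavior:

```python
import numpy as np

def half_size_reshape(a):
    # View the (h, w, 3) image as 2x2 blocks and average each block in
    # one reduction; accumulate in uint16 so 4 * 255 cannot overflow.
    h, w, c = a.shape
    blocks = a.reshape(h // 2, 2, w // 2, 2, c)
    return (blocks.sum(axis=(1, 3), dtype=np.uint16) // 4).astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(512, 512, 3), dtype=np.uint8)
print(half_size_reshape(img).shape)   # (256, 256, 3)
```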
Test code enclosed.
Christopher Barker, Ph.D.
Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception
-------------- next part --------------
A non-text attachment was scrubbed...
Size: 3526 bytes
Desc: not available
Url : http://mail.scipy.org/pipermail/numpy-discussion/attachments/20090806/439e92b5/attachment.bin