[SciPy-User] Help optimizing an algorithm

Chris Weisiger cweisiger@msg.ucsf....
Mon Feb 4 10:23:49 CST 2013


On Fri, Feb 1, 2013 at 5:31 PM, Zachary Pincus <zachary.pincus@yale.edu> wrote:

>
> Note that you'll run into trouble if you have image pixels that are above
> or below the per-pixel min/max range. With the map_coordinates() 'mode'
> parameter set to 'constant' and cval=-1 as you have, this will break
> spectacularly if by chance a pixel winds up darker than in the
> zero-exposure calibration image... yet this will happen occasionally, since
> there's a statistical distribution of noise in the pixel readout.
>
>
My current plan is to linearly extrapolate my low-end datapoints (which are
near the baseline of 100) to 0; thus it should be impossible to go off the
low end. The rare 99 output from the camera would still map to a valid
"exposure time".

Going off the high end is certainly still possible, but I can recognize
those because they'll use the cval of -1 (or NaN, as you say) and handle
them specially -- most likely by again using linear extrapolation of the
last few points where I do have data.
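A minimal sketch of the plan described above, with NumPy only. The calibration
arrays and the baseline of 100 here are made-up placeholder values, and
`to_exposure` is a hypothetical helper name; the real mapping would of course
be per-pixel and vectorized, but the extrapolation idea is the same:

```python
import numpy as np

# Assumed calibration data: camera intensities and the exposure times
# at which they were recorded (placeholder values for illustration).
intensities = np.array([100.0, 150.0, 300.0, 800.0, 2000.0])
exposures = np.array([0.0, 1.0, 2.0, 3.0, 4.0])

# Linearly extrapolate the low end down to intensity 0, so that an
# occasional readout below the baseline (e.g. 99) still maps to a
# valid "exposure time" instead of falling off the table.
slope_lo = (exposures[1] - exposures[0]) / (intensities[1] - intensities[0])
exposure_at_zero = exposures[0] - slope_lo * intensities[0]
intensities = np.concatenate(([0.0], intensities))
exposures = np.concatenate(([exposure_at_zero], exposures))

def to_exposure(pixel):
    """Map one intensity value to an exposure time (hypothetical helper)."""
    if pixel <= intensities[-1]:
        # In range (or below baseline): plain linear interpolation.
        return float(np.interp(pixel, intensities, exposures))
    # Off the high end: linearly extrapolate from the last two
    # calibration points, as suggested in the text above.
    slope_hi = (exposures[-1] - exposures[-2]) / (
        intensities[-1] - intensities[-2])
    return float(exposures[-1] + slope_hi * (pixel - intensities[-1]))
```

With this, `to_exposure(99)` lands just below the first real calibration
point rather than hitting a sentinel like -1, and values past the brightest
calibration point extend the last segment's slope instead of clamping.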

Thanks for looking over the code. Time to upscale it to deal with real
data...

-Chris
