[SciPy-User] Help optimizing an algorithm
Mon Feb 4 10:23:49 CST 2013
On Fri, Feb 1, 2013 at 5:31 PM, Zachary Pincus <firstname.lastname@example.org> wrote:
> Note that you'll run into trouble if you have image pixels that are above
> or below the per-pixel min/max range. With the map_coordinates() 'mode'
> parameter set to 'constant' and cval=-1 as you have, this will break
> spectacularly if by chance a pixel winds up darker than in the
> zero-exposure calibration image... yet this will happen occasionally, since
> there's a statistical distribution of noise in the pixel readout.
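To illustrate the failure mode described above, here is a minimal sketch (with made-up numbers) of a 1-D lookup through `scipy.ndimage.map_coordinates`: with `mode='constant'`, any coordinate outside the calibration range comes back as the `cval` sentinel rather than an interpolated value.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Toy stand-in for one pixel's calibration curve: values sampled at
# indices 0..3 (the numbers here are invented for illustration).
curve = np.array([100.0, 150.0, 220.0, 310.0])

# Fractional indices into the curve; -0.5 plays the role of a readout
# that came in below the zero-exposure calibration value.
coords = np.array([[-0.5, 1.5, 4.2]])

out = map_coordinates(curve, coords, order=1, mode='constant', cval=-1)
print(out)  # out-of-range lookups come back as the cval sentinel, -1
```

The in-range coordinate 1.5 interpolates normally (here to 185.0), while both out-of-range coordinates are flagged with -1, which is why a below-baseline readout must be handled explicitly.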
My current plan is to linearly extrapolate my low-end datapoints (which are
near the baseline of 100) down to 0; thus it should be impossible to go off the
low end. The rare 99 output from the camera would still map to a valid value.
Going off the high end is certainly still possible, but I can recognize
those because they'll use the cval of -1 (or NaN, as you say) and handle
them specially -- most likely by again using linear extrapolation of the
last few points where I do have data.
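The plan above can be sketched roughly as follows, assuming invented calibration numbers and names (`counts`, `values`): extend the calibration table down to a count of 0 using the slope of the first segment, and extrapolate the last segment for readouts off the high end.

```python
import numpy as np

# Hypothetical calibration samples for one pixel: camera counts vs. the
# calibrated quantity they correspond to (numbers invented for illustration).
counts = np.array([100.0, 150.0, 220.0, 310.0])
values = np.array([0.0, 10.0, 20.0, 30.0])

# Extend the low end: carry the first segment's slope down to count 0,
# so even a readout of 99 still lands inside the lookup range.
low_slope = (values[1] - values[0]) / (counts[1] - counts[0])
counts_ext = np.concatenate(([0.0], counts))
values_ext = np.concatenate(([values[0] - low_slope * counts[0]], values))

# Linear interpolation; np.interp clamps at the ends, so high-end overflow
# is detected separately and extrapolated from the last two calibration points.
readout = np.array([99.0, 180.0, 400.0])
result = np.interp(readout, counts_ext, values_ext)

high = readout > counts[-1]
high_slope = (values[-1] - values[-2]) / (counts[-1] - counts[-2])
result[high] = values[-1] + high_slope * (readout[high] - counts[-1])
print(result)
```

With these toy numbers, the below-baseline readout of 99 maps to a slightly negative value instead of failing, and the readout of 400 is extrapolated past the last calibration point rather than clamped.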
Thanks for looking over the code. Time to scale it up to deal with real data.