[SciPy-User] Help optimizing an algorithm
Fri Feb 1 19:31:53 CST 2013
> I often have trouble wrapping my head around vectorized problems; I'm much more of a software engineer than a mathematician so this is a difficult area for me. Incidentally, I really appreciate your help! I understood the rest of the explanation, I'm pretty sure, and mocked up this vectorized version that appears to function properly:
> I'd appreciate a more experienced (and, I suspect, more mentally awake!) look-over. And thanks again for your assistance!
This looks reasonable enough.
Note that you'll run into trouble if you have image pixels that are above or below the per-pixel min/max range. With the map_coordinates() 'mode' parameter set to 'constant' and cval=-1 as you have, this will break spectacularly if by chance a pixel winds up darker than in the zero-exposure calibration image... yet this will happen occasionally, since there's a statistical distribution of noise in the pixel readout.
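To make the failure concrete, here is a minimal sketch (the calibration array and coordinate values are made up for illustration, not taken from your code) showing how `mode='constant'` with `cval=-1` leaks the sentinel into the result when a lookup coordinate falls below the calibration range:

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Hypothetical per-pixel calibration curve: measured value at each
# exposure index, starting with the zero-exposure frame.
calib = np.array([10.0, 50.0, 120.0, 200.0])

# A negative coordinate models a pixel that, due to readout noise,
# came out darker than the zero-exposure calibration image.
coords = np.array([[-1.2]])

# With mode='constant', no interpolation is done beyond the edges:
# any out-of-range coordinate just returns cval.
out = map_coordinates(calib, coords, order=1, mode='constant', cval=-1)
print(out)  # -> [-1.]  the sentinel silently replaces a real exposure time
```

Any downstream arithmetic that treats that -1 as a valid exposure time will then be quietly wrong.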
Similarly, your calibration images probably don't (and definitely shouldn't!) contain saturated pixels, so you'll need to think about what to do when you get a pixel with a value in the region between that of the longest-exposure calibration image and 2**16-1.
To deal with the low-end case, you should probably use mode="nearest", so that randomly darker pixels still get assigned a zero-second exposure time instead of -1 as currently. For the high-end case, you need to make sure the calculated index isn't larger than the last valid index of the array, and if it is, trigger an error condition. (Alternately, you could append an extra image of NaN values to the stack, so that any offending pixels get set to NaN to signal a local error, while the rest of the image is still usable.)
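Both fixes might be sketched like this (again with made-up names and a 1-D stand-in for the per-pixel stack; the NaN-sentinel variant is one possible implementation, not the only one):

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Illustrative lookup table: exposure time for each calibration frame.
exposure_times = np.array([0.0, 0.1, 1.0, 10.0])
n = len(exposure_times)

# Fractional indices computed elsewhere: -0.4 models readout noise
# below the zero-exposure frame, 3.6 a pixel brighter than the
# longest-exposure calibration image.
idx = np.array([[-0.4, 1.5, 3.6]])

# High end: detect out-of-range indices before interpolating.  Either
# raise an error outright, or append a NaN entry and redirect the
# offending indices into it so only those pixels are poisoned.
if np.any(idx > n - 1):
    table = np.append(exposure_times, np.nan)
    idx = np.where(idx > n - 1, float(n), idx)
else:
    table = exposure_times

# Low end: mode='nearest' clamps indices below 0 to the
# zero-second entry instead of injecting a cval sentinel.
out = map_coordinates(table, idx, order=1, mode='nearest')
print(out)  # noisy-dark pixel -> 0.0, saturated pixel -> nan
```

The NaN route keeps the rest of the image usable, since NaNs propagate through later arithmetic only where the bad pixels are, while a hard `raise` aborts on the first saturated pixel.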