[SciPy-User] Help optimizing an algorithm
Fri Feb 1 16:07:59 CST 2013
On Thu, Jan 31, 2013 at 4:00 PM, Zachary Pincus <email@example.com> wrote:
> Let's go back a few steps to make sure we're on the same page... You have
> a series of flat-field images acquired at different exposure times, which
> together define a per-pixel gain function, right? Then for each new image
> you want to calculate the "effective exposure time" for the count at a
> given pixel. Which is to say, the light input. Is this all correct?
> So for each pixel, you are estimating the gain function f(exposure) ->
> value from your series of flat-field calibration images.
> Because it's monotonic, you can invert this to g(value) -> exposure.
> Then for any given value in an input image, you want to apply function g().
> Again, is this all correct?
Yes, this is all correct.
> Instead, let's resample the exposures and values to be uniform:
> num_samples = 10
> vmin, vmax = values.min(), values.max()
> uniform_values = numpy.linspace(vmin, vmax, num_samples)
> uniform_exposures = numpy.interp(uniform_values, values, exposures)
I think this is what I was missing: it's the function values that need to
be uniformly-spaced, not the exposure times used to collect those values.
Which in hindsight makes sense.
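For a single pixel, the whole round trip can be sketched as below. The calibration numbers are made up for illustration, but the point is the one above: once the *values* are uniformly spaced, inverting g(value) -> exposure reduces to computing an array index, with no search needed per query.

```python
import numpy as np

# Hypothetical single-pixel calibration data: pixel values are
# monotonically increasing in exposure time.
exposures = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
values = np.array([10.0, 19.0, 37.0, 70.0, 130.0])

# Resample so the x-axis of the inverse function g (the values,
# not the exposures) is uniformly spaced.
num_samples = 10
vmin, vmax = values.min(), values.max()
uniform_values = np.linspace(vmin, vmax, num_samples)
uniform_exposures = np.interp(uniform_values, values, exposures)

# With a uniform grid, g(value) becomes a direct table lookup:
value = 50.0
idx = int(round((value - vmin) / (vmax - vmin) * (num_samples - 1)))
approx_exposure = uniform_exposures[idx]
```

Using more samples makes the lookup table a finer approximation of g; nearest-index lookup can also be upgraded to linear interpolation between adjacent table entries for smoother results.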
I often have trouble wrapping my head around vectorized problems; I'm much
more of a software engineer than a mathematician, so this is a difficult
area for me. Incidentally, I really appreciate your help! I'm pretty sure I
understood the rest of the explanation, and I mocked up a vectorized
version that appears to function properly:
I'd appreciate a more experienced (and, I suspect, more mentally awake!)
look-over. And thanks again for your assistance!
-------------- next part --------------
An HTML attachment was scrubbed...