[Numpy-discussion] accuracy issues with numpy arrays?
eli bressert
bressert@gmail....
Tue Apr 29 17:00:21 CDT 2008
Hi,
I'm writing a quick script to import a FITS (astronomy) image with
very low pixel values, mostly on the order of 10^-9. I have written a
Python script that attempts to convert these low values to integers:
I take the mean of the 1000 lowest nonzero pixel values and divide the
rest of the image by that mean. Unfortunately, when I try this in
practice, *all* of the values in the image are treated as zeros. But
if I use a scipy.ndimage function, I get proper values. For example, I
take the pixel that I know has the highest value and do
x = scipy.ndimage.maximum(image)
print x
1.7400700016878545e-05
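For what it's worth, here is a minimal sketch of the conversion as I
understand it (mean of the 1000 lowest nonzero pixels, then divide and
round); the function name and the synthetic test image are mine, not
from the script below:

```python
import numpy as np

def flux2int_sketch(d):
    """Hypothetical sketch of the intended conversion: scale a
    low-valued float image by the mean of its 1000 faintest
    nonzero pixels, then round to integers."""
    flat = d.ravel()
    nonzero = flat[flat > 0]      # exclude zero pixels before sorting
    nonzero.sort()                # ndarray.sort() is in-place, returns None
    val = nonzero[:1000].mean()   # mean of the 1000 lowest nonzero values
    return np.round(d / val)

# synthetic image on the order of 1e-9, like the FITS data described
img = np.random.uniform(1e-9, 2e-9, size=(64, 64))
img[0, 0] = 0.0                   # one zero pixel, to be excluded
out = flux2int_sketch(img)
```

At these magnitudes float64 has plenty of precision, so the values
themselves should survive this arithmetic fine.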
The script is below. Thanks for the help.
Eli
import pyfits as p
import scipy as s
import scipy.ndimage as nd
import numpy as n
def flux2int(name):
    d = p.getdata(name)
    x, y = n.shape(d)
    l = x*y
    arr1 = n.array(d.reshape(x*y, 1))
    temp = n.unique(arr1[0])  # This is where the bug starts. All values
                              # are treated as zeros. Hence only one
                              # value remains, zero.
    arr1 = arr1.sort()
    arr1 = n.array(arr1)
    arr1 = n.array(arr1[s.where(arr1 >= temp)])
    val = n.mean(arr1[0:1000])
    d = d*(1.0/val)
    d = d.round()
    p.writeto(name[0,]+'fixed.fits', d, h)
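Two NumPy behaviors in the script above may be contributing, independent
of the pixel values themselves; this is an illustrative guess, with a
small made-up array standing in for d.reshape(x*y, 1):

```python
import numpy as np

a = np.array([[3e-9], [1e-9], [2e-9]])  # shape (3, 1), like d.reshape(x*y, 1)

# 1) a[0] is the first *row* of the (N, 1) array -- a single element --
#    so unique() of it can only ever return one value.
row0 = np.unique(a[0])
print(row0.size)         # 1

# 2) ndarray.sort() sorts in place and returns None, so the assignment
#    "arr1 = arr1.sort()" discards the data entirely.
b = a.flatten()          # flatten() copies, so 'a' is left untouched
result = b.sort()
print(result is None)    # True
print(b)                 # the flattened data, now sorted ascending
```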