[Numpy-discussion] import 16-bit tiff - byte-order problem?
Thu Nov 6 21:54:32 CST 2008
I'm trying to import a 16-bit tiff image into a numpy array. I have
found, using google, suggestions to do the following:
After starting with:
i = Image.open('16bitGreyscaleImage.tif')
Stéfan van der Walt suggested:
a = np.array(i.getdata()).reshape(i.size)  # getdata() is flat; reshape makes it 2-D
and adapted from Nadav Horesh's suggestion:
a = np.fromstring(i.tostring(), dtype=np.uint16).reshape(256, 256)
Both give me the same answer as:
a = np.array(i, dtype=np.uint16)
In all cases the resulting byte order appears to be wrong: zero-valued
pixels are correctly 0 in a, and in the right places, but every
non-zero value differs from the same image opened in ImageJ (where the
image looks correct).
What's the conversion magic I need to invoke to correctly interpret
this image type?
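For what it's worth, the symptom described (zeros in the right places,
non-zero values scrambled) is exactly what a byte-order mismatch looks
like: many TIFFs store 16-bit samples big-endian, while most desktop
machines are little-endian. A minimal sketch of the effect and the fix,
using made-up pixel values rather than a real TIFF file:

```python
import numpy as np

# Hypothetical pixel values, stored big-endian as many TIFFs are
pixels = np.array([1, 256, 4096], dtype=np.uint16)
raw = pixels.astype('>u2').tobytes()

# Reading the buffer with the wrong (little-endian) byte order
# leaves zeros alone but scrambles every non-zero value
wrong = np.frombuffer(raw, dtype='<u2')   # → [256, 1, 16]

# Specifying the big-endian dtype recovers the original pixels
right = np.frombuffer(raw, dtype='>u2')   # → [1, 256, 4096]

# An already-loaded array can be repaired in place with byteswap()
fixed = wrong.byteswap()                  # → [1, 256, 4096]
```

So in the original snippet, replacing dtype=np.uint16 with dtype='>u2'
(or calling .byteswap() on the result) should give values matching
ImageJ, assuming the file really is big-endian.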
Post-doctoral research fellow
Neurobiology, University of Pittsburgh