Importing 16-bit unsigned raw (image) data into Python is simple:
data = numpy.fromfile( source_file, dtype=numpy.uint16 )
data = numpy.asarray( data )
But I must also import 10-, 12-, and 14-bit unsigned integer data so I can encode it into a proprietary image format. The format simply accepts an array of 16- or 32-bit integers with a unique header. NumPy has no 10-, 12-, or 14-bit data types.
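As a quick sanity check (a minimal sketch, assuming each 10-, 12-, or 14-bit sample sits in its own 16-bit word rather than being bit-packed, and using a hypothetical file name), I can still read the file as uint16 and look at the maximum value to confirm the effective bit depth:

import numpy

source_file = 'image_10bit.raw'  # hypothetical path

# Assuming one sample per 16-bit word (no bit packing), reading as uint16
# is still valid for 10/12/14-bit data; the values just never use the top bits.
data = numpy.fromfile( source_file, dtype=numpy.uint16 )
print( data.max() )  # true 10-bit data stays below 1024 (2**10)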
When I open a 10-, 12-, or 14-bit RAW in ImageJ as a 16-bit unsigned image (the smallest unsigned bit depth ImageJ offers), it looks funny, but the pixels are all in the correct locations. That is the wrong bit depth, so I'm not surprised.
However, if I open the RAW in ImageJ with little-endian byte order, the 10-bit image displays perfectly as a 16-bit unsigned image. So I try this in Python:
image_bitness = numpy.uint16
image_bitness = numpy.dtype( image_bitness ).newbyteorder( '<' )
data = numpy.fromfile( source_file, dtype=image_bitness )
data = numpy.asarray( data )
Swapping the byte order to little-endian has no effect in Python. The image still looks funny when I convert it to the new format in Python and then open the converted image in ImageJ.
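A quick sketch of why forcing '<' might be a no-op (this assumes a typical little-endian machine): numpy.uint16 already reads in native byte order, so the bytes only come out differently if I deliberately read them as big-endian:

import numpy

raw = numpy.array( [0x0203], dtype=numpy.uint16 ).tobytes()  # b'\x03\x02' on a little-endian machine
print( numpy.frombuffer( raw, dtype='<u2' ) )   # [515] -> 0x0203, same as plain uint16
print( numpy.frombuffer( raw, dtype='>u2' ) )   # [770] -> 0x0302, bytes swapped
print( numpy.dtype( numpy.uint16 ).byteorder )  # '=' means native byte order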
I have implemented this same file conversion for 16 and 32-bit unsigned RAW and it works perfectly.
So to summarize:
The real issue is finding a workaround for 10-, 12-, and 14-bit unsigned data types.
I'm not sure how best to convert from 10-bit to 16-bit in general, especially if I must assume 16-bit just to call numpy.fromfile( source_file, dtype=numpy.uint16 ) and get an array of data in the first place (see Appendix).
I'm not sure why ImageJ's little-endian option displays the image correctly. Viewing a little-endian image as big-endian should presumably scramble the image entirely, but maybe I'm mistaken.
I'm not sure why swapping the byte order in Python does not have the same effect as it does in ImageJ.
Example 10-bit images:
[Image: pixels displayed incorrectly without "little endian" in ImageJ; images converted in Python into the new file format look like this]
[Image: pixels displayed correctly with "little endian" in ImageJ]
Appendix: Padding base-2 binary to get 16 bits
I thought I could pad out my base-2 binary number with 0s, then convert back to a base-10 integer. I tried this:
data = numpy.fromfile( source_file, dtype=numpy.uint16 ) # still runs for a 10-bit image?
for i in range( len( data ) ):
    data[i] = int( bin( data[i] )[2:].zfill(16), 2 )
data = numpy.asarray( data )
This converts 0000001011 to 0000000000001011 as a string, then converts that string back to a base-10 integer.
However, when I open my image in ImageJ after this conversion, it still looks funky. I'm assuming this is because 0000001011, 0000000000001011, and 1011 are all still the same number. How do I basically enforce greater memory allocation?
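A small sketch of why the padding loop changes nothing: the zero-padded string converts back to exactly the same integer, and each element of a uint16 array already occupies two bytes no matter how many leading zero bits the value has:

import numpy

x = numpy.uint16( 0b0000001011 )                  # the 10-bit value 11
print( int( bin( x )[2:].zfill(16), 2 ) == x )    # True: padding changes nothing
print( numpy.array( [x] ).dtype.itemsize )        # 2 bytes: already 16 bits of storage per sample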
I also tried to change the byte order by reversing the bit string by hand:
for i in range( len( data ) ):
    data[i] = int( bin( data[i] )[2:].zfill(16)[::-1], 2 )
This reverses my string, so 0001 becomes 1000. The result is the same: still a funny-looking image in ImageJ.
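For reference, the idiomatic numpy way to actually swap bytes (rather than reverse the whole bit string) would be something like the following sketch; the sample values here are hypothetical, not from my real file:

import numpy

data = numpy.array( [0x0203, 0x0010], dtype=numpy.uint16 )  # hypothetical sample values

swapped = data.byteswap()             # 0x0203 -> 0x0302, 0x0010 -> 0x1000
print( swapped )                      # [ 770 4096]

# Equivalent: reinterpret the same bytes with the opposite byte order
view = data.view( data.dtype.newbyteorder() )
print( view.astype( numpy.uint16 ) )  # [ 770 4096]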
It's as simple as this:
import cv2
import numpy as np
# Define parameters
w, h, bytesPerPixel = 1392, 1040, 2
# Load image
data = np.fromfile('10bit_1392x1040_10Frames.raw', dtype=np.uint16, offset=0, count=w*h).reshape((h,w)) << 6
# Save
cv2.imwrite('result.png', data)
If your data is 12-bit, change the left-shift from 6 to 4. If 14-bit, change the left-shift from 6 to 2.
If you want the Nth frame, change the offset to N * w * h * bytesPerPixel, because the offset is measured in bytes, not samples.
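Put together, a generic version might look like this sketch; bits and N are placeholders you would set for your own data, and the file name is just the example from above:

import cv2
import numpy as np

# Placeholders -- set these for your own data
w, h, bytesPerPixel = 1392, 1040, 2
bits = 10          # effective bit depth: 10, 12 or 14
N = 0              # zero-based frame index

shift = 16 - bits                    # 6 for 10-bit, 4 for 12-bit, 2 for 14-bit
offset = N * w * h * bytesPerPixel   # offset is in bytes, not samples

data = np.fromfile('10bit_1392x1040_10Frames.raw', dtype=np.uint16,
                   offset=offset, count=w*h).reshape((h, w)) << shift

cv2.imwrite('result.png', data)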