
Numpy workaround for dtype=uint10, uint12, uint14 for RAW image import


Importing 16-bit unsigned RAW (image) data into Python is simple:

import numpy

data = numpy.fromfile( source_file, dtype=numpy.uint16 )  # fromfile already returns an ndarray, so no asarray needed

But I must also import 10, 12, and 14-bit unsigned integer data so I can encode it into a proprietary image format. The format simply accepts an array of 16- or 32-bit integers with a unique header. Numpy has no dtype for 10, 12, or 14-bit integers.
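
A minimal sketch of what reading such a file looks like, assuming each sample sits in a two-byte container (the file name here is made up):

import numpy

# Hypothetical 10-bit file; each sample occupies two bytes, so uint16 reads it cleanly
data = numpy.fromfile( '10bit_image.raw', dtype=numpy.uint16 )
print( data.max() )           # true 10-bit data never exceeds 1023 (2**10 - 1)
print( data.max() <= 1023 )   # quick check that only the low 10 bits carry signal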

When I open a 10, 12, or 14-bit RAW in ImageJ as a 16-bit unsigned image (the smallest unsigned bit depth ImageJ offers), it looks funny, but every pixel is in the correct location. Since this is the wrong bit depth, I'm not surprised.

However, if I open the RAW in ImageJ with little-endian byte order, the 10-bit image displays perfectly as a 16-bit unsigned image. So I try this in Python:

import numpy

image_bitness = numpy.dtype( numpy.uint16 ).newbyteorder( '<' )  # explicit little-endian uint16
data = numpy.fromfile( source_file, dtype=image_bitness )

Swapping the byte order to little endian has no effect in Python. The image still looks funny when I convert it to the new format in Python and then open the result in ImageJ.
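
As a side note, a likely explanation (assuming an ordinary little-endian x86 machine): '<' is already the native byte order there, so newbyteorder( '<' ) yields the exact same dtype and changes nothing. An actual byte swap in numpy looks like this sketch:

import numpy

a = numpy.array( [1], dtype=numpy.uint16 )      # stored as bytes 01 00 on a little-endian machine
print( a.byteswap()[0] )                        # 256: the two bytes are physically reversed
print( a.view( a.dtype.newbyteorder() )[0] )    # 256: same bytes reinterpreted as big-endian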

I have implemented this same file conversion for 16 and 32-bit unsigned RAW and it works perfectly.

So to summarize:

Example 10-bit images

[Image: incorrectly displayed pixels without "little endian" in ImageJ; images converted in Python into the new file format look the same.]

[Image: correctly displayed pixels with "little endian" in ImageJ.]

Appendix: Padding base-2 binary to get 16 bits

I thought I could pad my base-2 binary number with leading 0s, then convert it back to a base-10 integer. I tried this:

import numpy

data = numpy.fromfile( source_file, dtype=numpy.uint16 )  # still reads without error for a 10-bit image
for i in range( len( data ) ):
    # pad the binary string to 16 digits, then convert back to a base-10 integer
    data[i] = int( bin( data[i] )[2:].zfill(16), 2 )

This converts 0000001011 to 0000000000001011 as a string, then converts that string back to a base-10 integer.

However, when I open my image in ImageJ after the conversion, it still looks funky. I'm assuming this is because 0000001011, 0000000000001011, and 1011 are all the same number. How do I basically enforce greater memory allocation?
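
A quick check confirms that suspicion: the zero-padded string parses back to the very same integer, so nothing about the stored value changes:

print( int( '1011', 2 ) )               # 11
print( int( '0000000000001011', 2 ) )   # 11: leading zeros do not change the value
# the uint16 array already allocates 16 bits per sample; the problem is that the
# values only span 0-1023, not that they lack storage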

I also tried to swap the byte order by reversing the binary string myself:

for i in range( len( data ) ):
    # reverse the 16-digit binary string, then convert back to a base-10 integer
    data[i] = int( bin( data[i] )[2:].zfill(16)[::-1], 2 )

This reverses my string, so 0001 becomes 1000. The result is the same: still a funny-looking image in ImageJ.
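
Worth noting: [::-1] reverses individual bits, whereas a byte swap moves whole 8-bit groups, so the two operations give different results. A small illustration with 16-bit values:

import numpy

v = 1
print( int( bin( v )[2:].zfill(16)[::-1], 2 ) )                # 32768: bit reversal gives 0b1000000000000000
print( numpy.array( [v], dtype=numpy.uint16 ).byteswap()[0] )  # 256: byte swap gives 0b0000000100000000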


Solution

  • It's as simple as this:

    import cv2
    import numpy as np
    
    # Define parameters
    w, h, bytesPerPixel = 1392, 1040, 2
    
    # Load one frame; the left-shift by 6 scales 10-bit values (0-1023) up to the full 16-bit range
    data = np.fromfile('10bit_1392x1040_10Frames.raw', dtype=np.uint16, offset=0, count=w*h).reshape((h,w)) << 6
    
    # Save
    cv2.imwrite('result.png', data)
    

    If your data is 12-bit, change the left-shift from 6 to 4. If 14-bit, change the left-shift from 6 to 2.

    If you want the Nth frame, change the offset to N * w * h * bytesPerPixel, because offset is measured in bytes, not samples.
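
    Putting those two remarks together, a generalized sketch (the function and parameter names are mine) might look like this:

    import cv2
    import numpy as np

    def load_frame(path, w, h, bit_depth, frame=0, bytesPerPixel=2):
        # offset is measured in bytes, not samples
        offset = frame * w * h * bytesPerPixel
        data = np.fromfile(path, dtype=np.uint16, offset=offset, count=w*h).reshape((h, w))
        # scale to the full 16-bit range: 10-bit -> shift 6, 12-bit -> 4, 14-bit -> 2
        return data << (16 - bit_depth)

    frame = load_frame('10bit_1392x1040_10Frames.raw', 1392, 1040, bit_depth=10, frame=2)
    cv2.imwrite('frame2.png', frame)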