Tags: python, pyfits

Size of FITS file different before and after processing


I have a problem with the file size after processing. I wrote a script which creates an edited image: from the raw image data it subtracts the flat-field image data and the dark-frame image data. The code is based on "Convert float NumPy array to big endian". My problem: at the beginning I have a FITS file of 2.8 MiB with dtype >i2, but after processing I get a FITS file of 11 MiB with dtype float64, and I don't know why. IDL has a FIX method for this (http://www.exelisvis.com/docs/FIX.html). In Python I tried imgg = imgg.astype(np.int16, copy=False); that gives me a 2.8 MiB image file, but the image comes out only in black and white...

Any suggestion please?


Solution

  • From http://docs.astropy.org/en/stable/io/fits/appendix/faq.html#why-is-an-image-containing-integer-data-being-converted-unexpectedly-to-floats

    If the header for your image contains non-trivial values for the optional BSCALE and/or BZERO keywords (that is, BSCALE != 1 and/or BZERO != 0), then the raw data in the file must be rescaled to its physical values according to the formula:

    physical_value = BZERO + BSCALE * array_value
    

    As BZERO and BSCALE are floating point values, the resulting value must be a float as well. If the original values were 16-bit integers, the resulting values are single-precision (32-bit) floats. If the original values were 32-bit integers the resulting values are double-precision (64-bit floats).
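The scaling formula above can be sketched in plain NumPy (hypothetical BZERO/BSCALE values; astropy performs the equivalent conversion internally when the data are accessed):

```python
import numpy as np

# Raw 16-bit integer data as stored on disk (BITPIX = 16)
raw = np.array([100, 200, 300], dtype=np.int16)

# Non-trivial scaling keywords from the header (hypothetical values)
BZERO, BSCALE = 32768.0, 2.0

# physical_value = BZERO + BSCALE * array_value
# 16-bit integers become single-precision (32-bit) floats
physical = BZERO + BSCALE * raw.astype(np.float32)

print(physical.dtype)  # float32
print(physical)        # [32968. 33168. 33368.]
```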

    This automatic scaling can easily catch you off guard if you’re not expecting it, because it doesn’t happen until the data portion of the HDU is accessed (to allow things like updating the header without rescaling the data). For example:

    >>> hdul = fits.open('scaled.fits')
    >>> image = hdul['SCI', 1]
    >>> image.header['BITPIX']
    32
    >>> image.header['BSCALE']
    2.0
    >>> data = image.data  # Read the data into memory
    >>> data.dtype
    dtype('float64')  # Got float64 despite BITPIX = 32 (32-bit int)
    >>> image.header['BITPIX']  # The BITPIX will automatically update too
    -64
    >>> 'BSCALE' in image.header  # And the BSCALE keyword removed
    False
    

    The reason for this is that once a user accesses the data they may also manipulate it and perform calculations on it. If the data were forced to remain as integers, a great deal of precision would be lost. So it is best to err on the side of not losing data, at the cost of causing some confusion at first.

    If the data must be returned to integers before saving, use the ImageHDU.scale method:

    >>> image.scale('int32')
    >>> image.header['BITPIX']
    32
    

    Alternatively, if a file is opened with mode='update' along with the scale_back=True argument, the original BSCALE and BZERO scaling will be automatically re-applied to the data before saving. Usually this is not desirable, especially when converting from floating point back to unsigned integer values. But this may be useful in cases where the raw data needs to be modified corresponding to changes in the physical values.
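What scale_back (and ImageHDU.scale) effectively do is apply the inverse of the scaling formula. A NumPy-only sketch of that round trip, with hypothetical values, also shows why going back to integers can lose precision:

```python
import numpy as np

BZERO, BSCALE = 32768.0, 2.0

# Physical (float) values after processing; note the fractional value
physical = np.array([32968.0, 33169.5, 33368.0], dtype=np.float64)

# Invert physical_value = BZERO + BSCALE * array_value,
# rounding to the nearest storable integer
array_value = np.rint((physical - BZERO) / BSCALE).astype(np.int16)

print(array_value)  # [100 201 300] -- the .5 of precision is gone
```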

    To prevent rescaling from occurring at all (good for updating headers; even if you don’t intend for the code to access the data, it’s good to err on the side of caution here), use the do_not_scale_image_data argument when opening the file:

    >>> hdul = fits.open('scaled.fits', do_not_scale_image_data=True)
    >>> image = hdul['SCI', 1]
    >>> image.data.dtype
    dtype('int32')
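Applied to the original question, a minimal sketch of the calibration step (small hypothetical arrays stand in for the FITS data): do the arithmetic in floating point, then convert back to 16-bit integers only as the last step. With astropy, calling hdu.scale('int16') on the HDU before hdu.writeto(...) does the equivalent and restores the 2.8 MiB on-disk size.

```python
import numpy as np

# Stand-ins for the raw, dark, and flat-field images (hypothetical data)
raw  = np.array([[1000, 1200], [1400, 1600]], dtype=np.int16)
dark = np.array([[100,  100],  [100,  100]],  dtype=np.int16)
flat = np.array([[1.0,  1.1],  [0.9,  1.0]],  dtype=np.float64)

# Calibrate in floating point so no precision is lost mid-calculation
calibrated = (raw - dark) / flat
print(calibrated.dtype)  # float64

# Round and convert back to 16-bit only at the end; with astropy,
# hdu.scale('int16') before hdu.writeto() achieves the same on disk
result = np.rint(calibrated).astype(np.int16)
print(result.dtype)  # int16
```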