python, nibabel

How to determine the x, y, z axes in NIfTI volumes using NiBabel?


I am currently working on the MSD dataset (Task 01: Brain Tumour segmentation) and I am trying to figure out which axis corresponds to x, y, and z (depth/slices). I am using nibabel to manipulate the volumes in the dataset.

import nibabel as nib

nii_file = "/kaggle/input/msd-dataset-task-1-brain-tumour-segmentation/Task01_BrainTumour/imagesTr/BRATS_001.nii"


img = nib.load(nii_file)


print("File:", nii_file)
print("Shape:", img.shape)        
print("Data type:", img.get_data_dtype())
print("Voxel spacing:", img.header.get_zooms())  
print("Affine matrix:\n", img.affine)

File: /kaggle/input/msd-dataset-task-1-brain-tumour-segmentation/Task01_BrainTumour/imagesTr/BRATS_001.nii
Shape: (240, 240, 155, 4)
Data type: float32
Voxel spacing: (1.0, 1.0, 1.0, 1.0)
Affine matrix:
 [[1. 0. 0. 0.]
  [0. 1. 0. 0.]
  [0. 0. 1. 0.]
  [0. 0. 0. 1.]]

Is there a way to find out these axes from the object's header? If not, is there a general approach to determine the axes?

I have tried reading the NiBabel documentation, but I could not find anything related (maybe I missed it). Thanks in advance.


Solution

  • I found the Coordinate Systems and Affines and Working with NIfTI Images doc pages very helpful.

    You're on the right track to orienting yourself to the data. A note on terminology: in numpy, x, y, and z usually refer to the 1st, 2nd, and 3rd array axes respectively. In anatomy, and neuroimaging in particular, the convention is often that x = left/right, y = front/back, and z = top/bottom. Scanners use different coordinate systems, so in addition to mapping array indices to anatomical orientation, you may also need the space labels. RAS is a common orientation, indicating left -> Right, posterior (back) -> Anterior (front), and inferior -> Superior.
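
    The affine is what ties the array indices to that scanner space: it maps a voxel index (i, j, k) to a position in millimetres. As a minimal sketch (reusing the img loaded above; the voxel index here is just an arbitrary example), you can check this with nibabel's apply_affine helper:

    from nibabel.affines import apply_affine

    # map a voxel index (i, j, k) to scanner-space coordinates in mm
    ijk = (120, 120, 77)  # roughly the centre voxel of a 240 x 240 x 155 volume
    xyz_mm = apply_affine(img.affine, ijk)
    print(xyz_mm)
    # with the identity affine shown above, this is simply [120. 120.  77.]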

    Next, I would suggest visualizing the data; I think that helps build a mental model of what is happening in the 3D array. Below is an adjusted snippet from the Coordinate Systems and Affines page that I have tailored based on your provided info.

    import nibabel as nib
    import numpy as np
    import matplotlib.pyplot as plt
    
    nii_file = "/kaggle/input/msd-dataset-task-1-brain-tumour-segmentation/Task01_BrainTumour/imagesTr/BRATS_001.nii"
    
    img = nib.load(nii_file)
    
    img_data = img.get_fdata() # get just data as array
    
    img_data.shape
    # (240, 240, 155, 4)
    # numpy x, y, z, plus a 4th axis (here the 4 MRI modalities; in fMRI it is often time)
    
    img_data_3d = img_data[..., 0]  # take the first 3D volume of the 4D array
    
    # pick central voxel to focus slices
    x, y, z = np.array(img_data_3d.shape[:3]) // 2
    
    def show_slices(slices, labels=['x', 'y', 'z']):
        """ Function to display a row of image slices """
        fig, axes = plt.subplots(1, len(slices))
        for i, slice in enumerate(slices):
            axes[i].imshow(slice.T, cmap="gray", origin="lower")
            axes[i].set_title(f'Numpy Axis {labels[i]}')
    
    
    slices = [img_data_3d[x, :, :], img_data_3d[:, y, :], img_data_3d[:, :, z]]
    show_slices(slices)
    plt.suptitle("Center slices for image") 
    plt.show()
    
    

    After visualizing the data, you can explore more about orientation and spaces with tools on this page.

    Example:

    # Affine matrix
    print(img.affine)
    
    # Orientation codes (like RAS)
    print(nib.aff2axcodes(img.affine))
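
    With the identity affine in your output, nib.aff2axcodes(img.affine) returns ('R', 'A', 'S'): the 1st numpy axis runs left -> right, the 2nd posterior -> anterior, and the 3rd inferior -> superior, so the 155-slice axis is the top/bottom "depth" axis. As a hedged sketch of how you might tie the codes to the array shape, and reorient an image whose affine is not already RAS-aligned:

    # label each numpy axis with its anatomical direction and size
    for axis, (code, size) in enumerate(zip(nib.aff2axcodes(img.affine), img.shape[:3])):
        print(f"numpy axis {axis}: {code}, {size} voxels")

    # reorient to the closest RAS-aligned orientation
    # (a no-op for this file, since the identity affine is already RAS)
    canonical_img = nib.as_closest_canonical(img)
    print(nib.aff2axcodes(canonical_img.affine))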