anaconda, spyder, itk, unet-neural-network

Why does Python stop using the GPU when I use the SimpleITK library in MONAI transforms?


I'm using Python 3.9 with Spyder 5.2.2 (Anaconda) for a U-Net segmentation task with MONAI. After loading all the images into a dictionary, I define the pre-processing steps with these lines:


import numpy as np
import SimpleITK as sitk
from monai.data import PILReader
from monai.inferers import SimpleInferer
from monai.transforms import (
    AsDiscrete,
    DataStatsd,
    AddChanneld,
    Compose,
    Activations,
    LoadImaged,
    Resized,
    RandFlipd,
    ScaleIntensityRanged,
    DataStats,
    AsChannelFirstd,
    AsDiscreted,
    ToTensord,
    EnsureTyped,
    RepeatChanneld,
    EnsureType
)

from monai.transforms import Transform


monai_load = [
    LoadImaged(keys=["image", "segmentation"], image_only=False, reader=PILReader()),
    EnsureTyped(keys=["image", "segmentation"], data_type="numpy"),
    AddChanneld(keys=["segmentation", "image"]),
    RepeatChanneld(keys=["image"], repeats=3),
    AsChannelFirstd(keys=["image"], channel_dim=0),
]


monai_transforms = [
    AsDiscreted(keys=["segmentation"], threshold=0.5),
    ToTensord(keys=["image", "segmentation"]),
]

class N4ITKTransform(Transform):
    """Apply N4 bias field correction to each channel of the image."""

    def __call__(self, image):
        filtered = []
        for channel in image["image"]:
            # Convert the numpy channel to a SimpleITK image and run N4 correction.
            inputImage = sitk.GetImageFromArray(channel)
            inputImage = sitk.Cast(inputImage, sitk.sitkFloat32)
            corrector = sitk.N4BiasFieldCorrectionImageFilter()
            outputImage = corrector.Execute(inputImage)
            filtered.append(sitk.GetArrayFromImage(outputImage))
        image["image"] = np.stack(filtered)
        return image

train_transforms = Compose(monai_load + [N4ITKTransform()] + monai_transforms)

When I compose these transforms and apply them to the training images, Python does not use the GPU, even though

torch.cuda.is_available()

returns True.

These are the lines where I apply the transforms:

train_ds = IterableDataset(data = train_data, transform = train_transforms)
train_loader = DataLoader(dataset = train_ds, batch_size = batch_size, num_workers = 0, pin_memory = True)

When I define the U-Net model, I send it to 'cuda'.
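For reference, the model definition looks roughly like this (the U-Net hyper-parameters below are placeholders, not my actual ones; the relevant part is only the final .to(device) call):

import torch
from monai.networks.nets import UNet

device = torch.device("cuda")
model = UNet(
    spatial_dims=2,               # 2D images loaded with PILReader
    in_channels=3,                # channel repeated to 3 by RepeatChanneld
    out_channels=1,               # binary segmentation mask
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
).to(device)                      # send the model to the GPU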

The problem is with the SimpleITK transform: if I remove it, Python uses the GPU as usual.

Thank you in advance for getting back to me.

Federico


Solution

  • The answer is simple: SimpleITK uses CPU for processing.

    I am not sure whether it is possible to get it to use some of the GPU-accelerated filters from ITK (its base library). If you use ITK Python directly, you can use GPU filters, but only a few filters have GPU implementations, and N4BiasFieldCorrection is not one of them. So if you want to use this filter, it has to run on the CPU.
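    That said, the CPU-bound N4 step does not prevent the network itself from training on the GPU; it only slows down data preparation. A minimal sketch of the training loop (reusing the train_loader and model from the question; names assumed) would be:

    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = model.to(device)                      # the U-Net defined in the question

    for batch in train_loader:                    # transforms, including N4, run on the CPU here
        images = batch["image"].to(device)        # copy the prepared tensors to the GPU
        labels = batch["segmentation"].to(device)
        outputs = model(images)                   # the forward pass runs on the GPU
        # ... loss, backward() and optimizer step as usual

    Since N4 correction is deterministic, another option is to run it once over the whole dataset, save the corrected images, and drop the transform from the per-epoch pipeline, so the CPU cost is not paid on every iteration.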