deep-learning, image-segmentation, semantic-segmentation, medical-imaging

U-Net segmentation without having a mask


I am new to deep learning and semantic segmentation.

I have a dataset of medical images (CT) in DICOM format, in which I need to segment tumours and the organs involved. I also have the organs contoured by our physician, stored as what we call an RT structure (RTSTRUCT), also in DICOM format.

As far as I know, people usually use a "mask" as the ground truth. Does this mean I need to convert all the contoured structures in the RT structure to masks, or can I use the information from the RT structure (.dcm) directly as my input?

Thanks for your help.


Solution

  • There is a library called pydicom that you need to install before you can decode and later visualise the CT images.

    Now, since you want to apply semantic segmentation and segment the tumours, the standard approach is to train a neural network on pairs of [image, mask], where every location in the mask is 0 except for the zones where the tumour is, which are marked with 1; in practice, the mask is your ground truth. So yes: you need to convert the contours in the RT structure into binary masks first.
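Converting a contour into such a binary mask can be sketched as follows — a minimal, pure-NumPy polygon rasteriser (even-odd ray casting), assuming the contour points have already been mapped from the RTSTRUCT's patient coordinates into pixel coordinates (in real data you must do that mapping first, using the image's position and pixel spacing):

```python
import numpy as np

def contour_to_mask(contour_xy, shape):
    """Rasterise a closed polygon, given as a list of (x, y) vertices in
    pixel coordinates, into a binary mask of the given (rows, cols) shape,
    using an even-odd ray-casting test at each pixel centre."""
    mask = np.zeros(shape, dtype=np.uint8)
    xs = [p[0] for p in contour_xy]
    ys = [p[1] for p in contour_xy]
    n = len(contour_xy)
    for r in range(shape[0]):
        for c in range(shape[1]):
            inside = False
            j = n - 1
            for i in range(n):
                # Does the edge (j -> i) cross the horizontal ray at row r,
                # to the right of column c?
                if ((ys[i] > r) != (ys[j] > r)) and \
                   (c < (xs[j] - xs[i]) * (r - ys[i]) / (ys[j] - ys[i]) + xs[i]):
                    inside = not inside
                j = i
            mask[r, c] = 1 if inside else 0
    return mask

# Example: a square contour inside a 10x10 slice.
m = contour_to_mask([(2, 2), (2, 7), (7, 7), (7, 2)], (10, 10))
```

This is deliberately simple and slow; for real volumes a vectorised routine (or a helper library that handles RTSTRUCT directly) is the practical choice, but the idea — polygon in, 0/1 array out — is the same.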

    Of course, for this you will have to implement a CustomDataGenerator() which yields, at every step, a batch of [image, mask] pairs as described above.
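A minimal sketch of such a generator, assuming the images and masks have already been loaded as NumPy arrays with matching first dimensions (all names here are illustrative, not part of any library):

```python
import numpy as np

def pair_generator(images, masks, batch_size=2, shuffle=True, seed=0):
    """Endlessly yield (image_batch, mask_batch) pairs, e.g. for
    Keras model.fit. images and masks must align along axis 0."""
    rng = np.random.default_rng(seed)
    n = len(images)
    while True:
        # Reshuffle the pairing order once per epoch.
        order = rng.permutation(n) if shuffle else np.arange(n)
        for start in range(0, n - batch_size + 1, batch_size):
            idx = order[start:start + batch_size]
            yield images[idx], masks[idx]

# Example with dummy data: four 8x8 single-channel slices and masks.
images = np.zeros((4, 8, 8, 1), dtype=np.float32)
masks = np.zeros((4, 8, 8, 1), dtype=np.uint8)
gen = pair_generator(images, masks, batch_size=2)
xb, yb = next(gen)
```

The key point is only that each yielded image batch stays aligned with its mask batch; whether you subclass keras.utils.Sequence or use a plain generator like this is a detail.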