I'm trying to perform MRI segmentation with a deep learning model, but I'm getting an error related to the dimensions of the input image, and I'm not sure why.
import numpy as np
import nibabel as nib
import matplotlib.pyplot as plt
%matplotlib inline

# Load the NIfTI volume and read it into a NumPy array
img = nib.load('/content/drive/My Drive/Programa2/P1_FL_final.nii.gz')
img_np = img.get_fdata()
print(type(img_np), img_np.shape)

# Plot one slice of the volume
img_slice = img_np[:, :, 20]
plt.imshow(img_slice, cmap='gray')

# Make a prediction
img_analised = img_np
#img_analised = img_np[:, :, :]  # I was trying to change dimensions
print(img_analised.shape)  # Image shape: (480, 512, 30)
newmodel.predict(img_analised)
Error message:
ValueError: Input 0 of layer conv2d is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [32, 512, 30]
The problem was the input shape: the model was trained on four different MRI modalities stacked as channels, and I was passing fewer. A Keras conv2d layer expects a 4D tensor of shape (batch, height, width, channels), which is why the error asks for min_ndim=4; the [32, 512, 30] in the traceback apparently appeared because predict treated the first axis of my 3D array as the batch dimension and split it into its default batches of 32. Once I built the input with all four modalities in the channel axis, it worked.
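For anyone hitting the same error, here is a minimal sketch of the fix, assuming a Keras model whose conv2d layers expect 4-channel 480x512 slices. Only the FLAIR path and newmodel come from my code above; the other modality file names and the slice-wise batching are hypothetical placeholders for illustration:

import numpy as np
import nibabel as nib

# Check what the model actually expects (Keras attribute), e.g. (None, 480, 512, 4)
print(newmodel.input_shape)

# One file per modality; only the FLAIR path is real, the rest are assumed names
paths = [
    '/content/drive/My Drive/Programa2/P1_FL_final.nii.gz',   # FLAIR (from my code above)
    '/content/drive/My Drive/Programa2/P1_T1_final.nii.gz',   # T1  (hypothetical)
    '/content/drive/My Drive/Programa2/P1_T1c_final.nii.gz',  # T1c (hypothetical)
    '/content/drive/My Drive/Programa2/P1_T2_final.nii.gz',   # T2  (hypothetical)
]

# Each volume is (480, 512, 30); stacking on a new last axis gives (480, 512, 30, 4)
volumes = [nib.load(p).get_fdata() for p in paths]
stacked = np.stack(volumes, axis=-1)

# conv2d wants (batch, height, width, channels), so treat each of the
# 30 axial slices as one sample: (30, 480, 512, 4)
batch = np.transpose(stacked, (2, 0, 1, 3))
print(batch.shape)  # (30, 480, 512, 4)

pred = newmodel.predict(batch)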