machine-learning  deep-learning  pre-trained-model

How to normalize an image with its own mean and standard deviation during training in deep learning networks


I am training a VGG16 network on a medical image dataset using PyTorch. All images are in a single folder. The following code snippet shows how I am pre-processing the images.

import torchvision
from torchvision import transforms

train_data = torchvision.datasets.ImageFolder(
        root=TRAIN_ROOT,  # path to training folder
        transform=transforms.Compose([
                      transforms.Resize((224, 224)),
                      transforms.ToTensor(),
                      transforms.Normalize(mean=[0.549, 0.815, 0.779],  # mean of entire training set
                                           std=[0.408, 0.159, 0.268])   # std. dev. of entire training set
        ])
)

Here I am normalizing each image with the mean and standard deviation of the entire training set. Instead, I want to normalize each image with its own mean and standard deviation. How can I do that?


Solution

  • I think you can do something like this:

    import torch
    from torchvision import transforms

    class NormalizeImage(object):
        """Normalize a tensor image with its own mean and standard deviation."""
        def __call__(self, img):
            mean = torch.mean(img)  # mean over all channels and pixels
            std = torch.std(img)    # std. dev. over all channels and pixels
            return (img - mean) / std

    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        NormalizeImage()
    ])