tensorflow · machine-learning · keras · deep-learning · normalization

How to add InstanceNormalization on Tensorflow/keras


I am new to TensorFlow and Keras. I am building a dilated ResNet and want to add instance normalization to a layer, but it keeps throwing errors.

I am using TensorFlow 1.15 and Keras 2.1. The commented-out BatchNormalization line works, but when I try to use instance normalization instead, the module cannot be found.

Thanks a lot for any suggestions.



from keras.layers import Conv2D
from keras.layers.normalization import BatchNormalization
from keras.optimizers import Nadam, Adam
from keras.layers import Input, Dense, Reshape, Activation, Flatten, Embedding, Dropout, Lambda, add, concatenate, Concatenate, ConvLSTM2D, LSTM, average, MaxPooling2D, multiply, MaxPooling3D
from keras.layers import GlobalAveragePooling2D, Permute
from keras.layers.advanced_activations import LeakyReLU, PReLU
from keras.layers.convolutional import UpSampling2D, Conv2D, Conv1D
from keras.models import Sequential, Model
from keras.utils import multi_gpu_model
from keras.utils.generic_utils import Progbar
from keras.constraints import maxnorm
from keras.activations import tanh, softmax
from keras import metrics, initializers, utils, regularizers
import tensorflow as tf
import numpy as np
import math
import os
import sys
import random
import keras.backend as K
epsilon = K.epsilon()


def basic_block_conv2D_norm_elu(filters, kernel_size, kernel_regularizer=regularizers.l2(1e-4),
                                act_func="elu", normalize="Instance", dropout=0.15,
                                strides=1, use_bias=True, kernel_initializer="he_normal",
                                _dilation_rate=0):
    def f(input):
        # Collect the Conv2D arguments once instead of duplicating the call
        conv_kwargs = dict(filters=filters, kernel_size=kernel_size, strides=strides,
                           padding="same", use_bias=use_bias)
        if kernel_regularizer is not None:
            conv_kwargs["kernel_initializer"] = kernel_initializer
            conv_kwargs["kernel_regularizer"] = kernel_regularizer
        if _dilation_rate != 0:
            conv_kwargs["dilation_rate"] = _dilation_rate
        x = Conv2D(**conv_kwargs)(input)

        if dropout is not None:
            x = Dropout(dropout)(x)

        if normalize is not None:
            x = InstanceNormalization()(x)  # this is the layer that cannot be found
#            x = BatchNormalization()(x)
        return Activation(act_func)(x)
    return f

Solution

  • Keras itself ships no separate InstanceNormalization() layer. (That doesn't mean you can't apply instance normalisation.)

    In Keras, the tf.keras.layers.BatchNormalization layer can be used to apply several types of normalisation, depending on its axis argument.

    This layer has the following signature:

        tf.keras.layers.BatchNormalization(
            axis=-1,
            momentum=0.99,
            epsilon=0.001,
            center=True,
            scale=True,
            beta_initializer="zeros",
            gamma_initializer="ones",
            moving_mean_initializer="zeros",
            moving_variance_initializer="ones",
            beta_regularizer=None,
            gamma_regularizer=None,
            beta_constraint=None,
            gamma_constraint=None,
            **kwargs
        )
    

    Now you can change the axis parameter to obtain an instance normalisation layer, or another type of normalisation.

    The formula is the same for BatchNormalisation and InstanceNormalisation; they differ only in the axes over which the mean μ and variance σ² are computed:

        y = γ · (x − μ) / √(σ² + ε) + β

    Now, assume a channels-first layout, i.e. [B, C, H, W]. For batch normalisation, pass the channel axis as the axis of the BatchNormalization layer; it will then compute C means and standard deviations.

    BatchNormalisation layer: tf.keras.layers.BatchNormalization(axis=1)

    For instance normalisation, set axis to both the batch and the channel axes; the layer will then compute B*C means and standard deviations.

    InstanceNormalisation layer: tf.keras.layers.BatchNormalization(axis=[0,1])
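    The axis semantics can be checked with plain NumPy, independently of Keras: axis names the axes that are kept, and statistics are computed over all the remaining axes. A minimal sketch for a channels-first tensor:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3, 8, 8))  # [B, C, H, W] = [4, 3, 8, 8]

# Batch normalisation: keep the channel axis -> C statistics
bn_mean = x.mean(axis=(0, 2, 3))   # shape (3,)
bn_var = x.var(axis=(0, 2, 3))

# Instance normalisation: keep batch and channel axes -> B*C statistics
in_mean = x.mean(axis=(2, 3))      # shape (4, 3)
in_var = x.var(axis=(2, 3))

# Normalise each (b, c) feature map with its own statistics
eps = 1e-3
y = (x - in_mean[..., None, None]) / np.sqrt(in_var[..., None, None] + eps)

print(bn_mean.shape)  # (3,)
print(in_mean.shape)  # (4, 3)
```

    After this, every individual feature map in y has (near-)zero mean, which is exactly the instance normalisation behaviour.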

    Update 1

    While using BatchNormalization as instance normalisation, you must call the layer with training=True, so that it normalises with the statistics of the current input rather than its accumulated moving averages.
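    A NumPy sketch (not Keras itself) of why the training flag matters: in inference mode BatchNormalization substitutes stored moving statistics for the current input's statistics, which destroys the per-instance normalisation. The moving values below are the layer's defaults (zeros/ones), assumed here for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=5.0, scale=2.0, size=(2, 3, 4, 4))  # [B, C, H, W]
eps = 1e-3

# training=True behaviour: statistics of the current input, per (b, c)
mu = x.mean(axis=(2, 3), keepdims=True)
var = x.var(axis=(2, 3), keepdims=True)
y_train = (x - mu) / np.sqrt(var + eps)

# training=False behaviour: fixed moving statistics (default zeros/ones)
moving_mean, moving_var = 0.0, 1.0
y_infer = (x - moving_mean) / np.sqrt(moving_var + eps)

# Only the training-mode output is normalised per instance
print(np.abs(y_train.mean(axis=(2, 3))).max())  # ~0
print(np.abs(y_infer.mean(axis=(2, 3))).max())  # far from 0
```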

    Update 2

    Alternatively, you can use the ready-made InstanceNormalization layer from TensorFlow Addons:

    https://www.tensorflow.org/addons/api_docs/python/tfa/layers/InstanceNormalization