python · tensorflow · deep-learning · tf.keras · dropout

How to extract features from a CNN model using its actual weights after training?


First, I trained AlexNet on CIFAR-10 and got 80% accuracy. Now I want to extract features from the last dropout layer using the weights that gave that 80% accuracy. Here is the model:

Alexnet = keras.Sequential([
    keras.layers.Conv2D(filters=32, kernel_size=(3,3), activation='relu', padding="same", input_shape=(32,32,3)),
    keras.layers.BatchNormalization(),
    keras.layers.Conv2D(filters=32, kernel_size=(3,3), activation='relu'),
    keras.layers.MaxPool2D(pool_size=(2,2)),
    keras.layers.Dropout(0.2),
    keras.layers.Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding="same"),
    keras.layers.BatchNormalization(),
    keras.layers.Conv2D(filters=64, kernel_size=(1,1), activation='relu'),
    keras.layers.BatchNormalization(),
    keras.layers.MaxPool2D(pool_size=(2,2)),
    keras.layers.Dropout(0.2),
    keras.layers.Flatten(),
    keras.layers.Dense(1024, activation='relu'),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(10, activation='softmax')
])

Here is how I tried to extract the features (outputs) of the last dropout layer:

feature_extractor = keras.Model(
  inputs=Alexnet.inputs,
  outputs=Alexnet.get_layer(name="dropout_2").output,
)

I want to do this using the model's weights after training. Could anyone help me, please?

Thanks in advance,


Solution

  • You can build your model as follows:

     inputs = keras.layers.Input(shape=(32,32,3))
     x = keras.layers.Conv2D(filters=32, kernel_size=(3,3), activation='relu', 
                             padding="same")(inputs)
     x = keras.layers.BatchNormalization()(x)
     x = keras.layers.Conv2D(filters=32, kernel_size=(3,3), activation='relu')(x)
     x = keras.layers.MaxPool2D(pool_size=(2,2))(x)
     x = keras.layers.Dropout(0.2)(x)
     x = keras.layers.Conv2D(filters=64, kernel_size=(3,3), activation='relu', 
                             padding="same")(x)
     x = keras.layers.BatchNormalization()(x)
     x = keras.layers.Conv2D(filters=64, kernel_size=(1,1), activation='relu')(x)
     x = keras.layers.BatchNormalization()(x)
     x = keras.layers.MaxPool2D(pool_size=(2,2))(x)
     x = keras.layers.Dropout(0.2)(x)
     x = keras.layers.Flatten()(x)
     x = keras.layers.Dense(1024,activation='relu')(x)
     x = keras.layers.Dropout(0.2)(x)
     intermediary_model = keras.Model(inputs, x)
     x = keras.layers.Dense(10, activation='softmax')(x)
     model = keras.Model(inputs,x)
    

    Then you train only the final model; because intermediary_model is built from the same layer objects, it automatically shares the trained weights. You can then obtain the feature map you want through intermediary_model.predict(some_input).
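
    To make the shared-weights idea concrete, here is a minimal end-to-end sketch of the approach above: build both models from the same layer graph, train the full model, then call predict on the intermediary one. The random arrays stand in for CIFAR-10 purely for illustration; substitute your real training data.

    ```python
    import numpy as np
    from tensorflow import keras

    # Build the network with the functional API; both models below
    # reference the SAME layer objects, so they share weights.
    inputs = keras.layers.Input(shape=(32, 32, 3))
    x = keras.layers.Conv2D(32, (3, 3), activation='relu', padding="same")(inputs)
    x = keras.layers.BatchNormalization()(x)
    x = keras.layers.Conv2D(32, (3, 3), activation='relu')(x)
    x = keras.layers.MaxPool2D((2, 2))(x)
    x = keras.layers.Dropout(0.2)(x)
    x = keras.layers.Conv2D(64, (3, 3), activation='relu', padding="same")(x)
    x = keras.layers.BatchNormalization()(x)
    x = keras.layers.Conv2D(64, (1, 1), activation='relu')(x)
    x = keras.layers.BatchNormalization()(x)
    x = keras.layers.MaxPool2D((2, 2))(x)
    x = keras.layers.Dropout(0.2)(x)
    x = keras.layers.Flatten()(x)
    x = keras.layers.Dense(1024, activation='relu')(x)
    x = keras.layers.Dropout(0.2)(x)
    intermediary_model = keras.Model(inputs, x)          # ends at last dropout
    outputs = keras.layers.Dense(10, activation='softmax')(x)
    model = keras.Model(inputs, outputs)                 # full classifier

    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    # Dummy data for the sketch; replace with (x_train, y_train) from CIFAR-10.
    x_train = np.random.rand(8, 32, 32, 3).astype("float32")
    y_train = np.random.randint(0, 10, size=(8,))
    model.fit(x_train, y_train, epochs=1, batch_size=4, verbose=0)

    # Features from the last dropout layer, using the trained weights.
    features = intermediary_model.predict(x_train[:4], verbose=0)
    print(features.shape)  # one 1024-dim vector per input image
    ```

    Note that Dropout is an identity at inference time, so these features are the same as the output of the 1024-unit Dense layer.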