I built a CNN model that classifies facial moods as happy, sad, energetic, and neutral faces. I used the VGG16 pre-trained model and froze all of its layers. After 50 epochs of training, my model's test accuracy is 0.65 and its validation loss is about 0.8.
My train data folder has 16,000 (4 x 4,000) RGB images, my validation data folder has 2,000 (4 x 500), and my test data folder has 4,000 (4 x 1,000).
What would you suggest to increase the model's accuracy?
I have also tried making some predictions with my model, and the predicted class is always the same. What could cause this problem?
What I Have Tried So Far:
But I could not increase the validation and test accuracy.
My Code
import tensorflow

train_src = "/content/drive/MyDrive/Affectnet/train_class/"
val_src = "/content/drive/MyDrive/Affectnet/val_class/"
test_src = "/content/drive/MyDrive/Affectnet/test_classs/"
train_datagen = tensorflow.keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
)
train_generator = train_datagen.flow_from_directory(
    train_src,
    target_size=(224, 224),
    batch_size=32,
    class_mode='categorical',
    shuffle=True
)
validation_datagen = tensorflow.keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255
)
validation_generator = validation_datagen.flow_from_directory(
    val_src,
    target_size=(224, 224),
    batch_size=32,
    class_mode='categorical',
    shuffle=True
)
conv_base = tensorflow.keras.applications.VGG16(weights='imagenet',
                                                include_top=False,
                                                input_shape=(224, 224, 3))
for layer in conv_base.layers:
    layer.trainable = False
model = tensorflow.keras.models.Sequential()
# VGG16 is added as the convolutional base.
model.add(conv_base)
# The feature maps are flattened from matrices into a vector.
model.add(tensorflow.keras.layers.Flatten())
# Our classification head is added.
model.add(tensorflow.keras.layers.Dropout(0.5))
model.add(tensorflow.keras.layers.Dense(256, activation='relu'))
model.add(tensorflow.keras.layers.Dense(4, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer=tensorflow.keras.optimizers.Adam(learning_rate=1e-5),
              metrics=['acc'])
history = model.fit(
    train_generator,
    epochs=50,
    steps_per_epoch=100,
    validation_data=validation_generator,
    validation_steps=5,
    workers=8
)
Well, a few things. For the training set you say you have 16,000 images. However, with a batch size of 32 and steps_per_epoch=100, in any given epoch you are only training on 32 x 100 = 3,200 images. Similarly, you have 2,000 validation images, but with a batch size of 32 and validation_steps=5 you are only validating on 32 x 5 = 160 images.

Now, VGG is an OK model, but I don't use it because it is very large, which increases the training time significantly, and there are other models out there for transfer learning that are smaller and even more accurate. I suggest you try EfficientNetB3. Use the code
conv_base = tensorflow.keras.applications.EfficientNetB3(weights='imagenet',
                                                         include_top=False,
                                                         input_shape=(224, 224, 3),
                                                         pooling='max')
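The head on top of that base then gets a bit simpler. Here is a minimal sketch that keeps the Dropout and Dense sizes from your original model:

model = tensorflow.keras.models.Sequential()
model.add(conv_base)
# pooling='max' already returns a flat feature vector, so no Flatten layer is needed
model.add(tensorflow.keras.layers.Dropout(0.5))
model.add(tensorflow.keras.layers.Dense(256, activation='relu'))
model.add(tensorflow.keras.layers.Dense(4, activation='softmax'))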
As the sketch shows, with pooling='max' you can eliminate the Flatten layer. Also, EfficientNet models expect pixels in the range 0 to 255, so remove the rescale=1./255 from your generators.

The next thing to do is to use an adjustable learning rate. This can be done with Keras callbacks; the one you want is ReduceLROnPlateau (see the Keras callbacks documentation). Set it up to monitor the validation loss. My suggested code for that is below:
rlronp = tensorflow.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                                      patience=1, verbose=1)
I also recommend you use the EarlyStopping callback (again, see the Keras callbacks documentation). My recommended code for that is shown below:
estop = tensorflow.keras.callbacks.EarlyStopping(monitor="val_loss", patience=4, verbose=1,
                                                 restore_best_weights=True)
Now in model.fit include
callbacks=[rlronp, estop]
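For example, the whole compile-and-fit call might look like this. This is just a sketch: the step counts are computed from your folder sizes and batch size (16,000/32 = 500 and ceil(2,000/32) = 63, so each epoch sees all of the data), and it assumes the generators, model, and callbacks defined above:

import math

# Cover all of the training and validation images in each epoch
steps_per_epoch = math.ceil(train_generator.samples / train_generator.batch_size)             # 500
validation_steps = math.ceil(validation_generator.samples / validation_generator.batch_size)  # 63

model.compile(loss='categorical_crossentropy',
              optimizer=tensorflow.keras.optimizers.Adam(learning_rate=1e-3),
              metrics=['acc'])
history = model.fit(
    train_generator,
    epochs=50,
    steps_per_epoch=steps_per_epoch,
    validation_data=validation_generator,
    validation_steps=validation_steps,
    callbacks=[rlronp, estop],
    workers=8
)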
As in the sketch, set your learning rate to .001 and set epochs=50. If tripped, the estop callback will return your model loaded with the weights from the epoch with the lowest validation loss. I notice you have the code
for layer in conv_base.layers:
    layer.trainable = False
I know the tutorials tell you to do that, but I get better results leaving the base trainable, and I have done this on hundreds of models.
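If you want to try that, the change is minimal. A sketch: either delete the freeze loop entirely, or flip the flag back, keeping in mind that a change to trainable only takes effect once the model is compiled:

conv_base.trainable = True  # leave the whole base trainable instead of freezing it
# A change to `trainable` only takes effect when the model is (re)compiled
model.compile(loss='categorical_crossentropy',
              optimizer=tensorflow.keras.optimizers.Adam(learning_rate=1e-3),
              metrics=['acc'])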