tensorflow · keras · deep-learning · imagedatagenerator

How do I use multiple train_generators/validation_generators in model.fit?


I have 4 datasets, coming from 4 different dataframes. Two of the datasets are used to predict image aesthetic scores, while the other two are used to predict image quality scores. I want to train a single model that predicts the scores respectively, taking 4 separate inputs and producing 4 separate output scores. I use InceptionResNetV2 as my base model.

model = Model(inputs=[input_aesthetic1, input_aesthetic2, input_quality1, input_quality2], outputs=[output_aesthetic1, output_aesthetic2, output_quality1, output_quality2])

Hence, I decided to use ImageDataGenerators to feed images from 4 different directories. This is how I prepared them for all 4 datasets. Note that, even though all of them use the same ID column for x_col, the filenames follow different naming formats, as they come from different datasets.

# preprocess the images in train-validation-test, do for all dataset
train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
test_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
val_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)

# ava
ava_train_generator = train_datagen.flow_from_dataframe(
    dataframe=ava_train_df,
    directory=ava_images, 
    x_col="ID", 
    y_col="scaled_MOS_aesthetic", 
    class_mode="raw", 
    target_size=(224, 224), 
    batch_size=32
)

ava_test_generator = test_datagen.flow_from_dataframe(
    dataframe=ava_test_df, 
    directory=ava_images, 
    x_col="ID", 
    y_col="scaled_MOS_aesthetic", 
    class_mode="raw", 
    target_size=(224, 224), 
    batch_size=32
)

ava_val_generator = val_datagen.flow_from_dataframe(
    dataframe=ava_val_df, 
    directory=ava_images, 
    x_col="ID", 
    y_col="scaled_MOS_aesthetic", 
    class_mode="raw", 
    target_size=(224, 224), 
    batch_size=32
)
print('AVA generators complete\n')

# para
para_train_generator = train_datagen.flow_from_dataframe(
    dataframe=para_train_df,
    directory=para_images, 
    x_col="ID", 
    y_col="scaled_MOS_aesthetic", 
    class_mode="raw", 
    target_size=(224, 224), 
    batch_size=32
)

para_test_generator = test_datagen.flow_from_dataframe(
    dataframe=para_test_df, 
    directory=para_images, 
    x_col="ID", 
    y_col="scaled_MOS_aesthetic", 
    class_mode="raw", 
    target_size=(224, 224), 
    batch_size=32
)

para_val_generator = val_datagen.flow_from_dataframe(
    dataframe=para_val_df, 
    directory=para_images, 
    x_col="ID", 
    y_col="scaled_MOS_aesthetic", 
    class_mode="raw", 
    target_size=(224, 224), 
    batch_size=32
)
print('PARA generators complete\n')

# koniq
koniq_train_generator = train_datagen.flow_from_dataframe(
    dataframe=koniq_train_df,
    directory=koniq_images, 
    x_col="ID", 
    y_col="scaled_MOS_quality", 
    class_mode="raw", 
    target_size=(224, 224), 
    batch_size=32
)

koniq_test_generator = test_datagen.flow_from_dataframe(
    dataframe=koniq_test_df, 
    directory=koniq_images, 
    x_col="ID", 
    y_col="scaled_MOS_quality", 
    class_mode="raw", 
    target_size=(224, 224), 
    batch_size=32
)

koniq_val_generator = val_datagen.flow_from_dataframe(
    dataframe=koniq_val_df, 
    directory=koniq_images, 
    x_col="ID", 
    y_col="scaled_MOS_quality", 
    class_mode="raw", 
    target_size=(224, 224), 
    batch_size=32
)
print('KoNIQ generators complete\n')

# spaq
spaq_train_generator = train_datagen.flow_from_dataframe(
    dataframe=spaq_train_df,
    directory=spaq_images, 
    x_col="ID", 
    y_col="scaled_MOS_quality", 
    class_mode="raw", 
    target_size=(224, 224), 
    batch_size=32
)

spaq_test_generator = test_datagen.flow_from_dataframe(
    dataframe=spaq_test_df, 
    directory=spaq_images, 
    x_col="ID", 
    y_col="scaled_MOS_quality", 
    class_mode="raw", 
    target_size=(224, 224), 
    batch_size=32
)

spaq_val_generator = val_datagen.flow_from_dataframe(
    dataframe=spaq_val_df, 
    directory=spaq_images, 
    x_col="ID", 
    y_col="scaled_MOS_quality", 
    class_mode="raw", 
    target_size=(224, 224), 
    batch_size=32
)
print('SPAQ generators complete\n')

This was the output from the ImageDataGenerator:

Found 2040 validated image filenames.
Found 2039 validated image filenames.
AVA generators complete

Found 4482 validated image filenames.
Found 961 validated image filenames.
Found 961 validated image filenames.
PARA generators complete

Found 5756 validated image filenames.
Found 1234 validated image filenames.
Found 1233 validated image filenames.
KoNIQ generators complete

Found 8243 validated image filenames.
Found 1767 validated image filenames.
Found 1767 validated image filenames.
SPAQ generators complete
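For reference, the steps_per_epoch / validation_steps values used in model.fit below can be derived from these counts. A minimal sketch, assuming batch_size=32 and the train/validation counts printed above (the AVA train count is not shown in the output, so it is omitted here; the dictionary names are illustrative, not from my actual code):

```python
import math

batch_size = 32

# Train and validation counts taken from the generator output above.
train_counts = {"para": 4482, "koniq": 5756, "spaq": 8243}
val_counts = {"para": 961, "koniq": 1233, "spaq": 1767}

# One step per batch, rounding up so the last partial batch is included.
steps_per_epoch = {k: math.ceil(n / batch_size) for k, n in train_counts.items()}
val_steps = {k: math.ceil(n / batch_size) for k, n in val_counts.items()}

print(steps_per_epoch)  # {'para': 141, 'koniq': 180, 'spaq': 258}
print(val_steps)        # {'para': 31, 'koniq': 39, 'spaq': 56}
```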

I then combined these generators using zip() and passed the result to model.fit():

history = model.fit(x=zip(ava_train_generator, para_train_generator, koniq_train_generator, spaq_train_generator),
                    steps_per_epoch = max(steps_per_epoch1, steps_per_epoch2, steps_per_epoch3, steps_per_epoch4),
                    epochs = config.epoch,
                    validation_data = zip(ava_val_generator, para_val_generator, koniq_val_generator, spaq_val_generator),
                    validation_steps = max(val_steps1, val_steps2, val_steps3, val_steps4),
                    callbacks = [
                      model_checkpoint_callback,
                      early_stopping_callback
                    ])

But an error occurred:

"name": "ValueError",
    "message": "Data is expected to be in format `x`, `(x,)`, `(x, y)`, or `(x, y, sample_weight)`

I checked all the target sizes, and they are all the same. What is the issue here, and how should I combine ImageDataGenerators from separate directories in this scenario?
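For context on the error, a minimal sketch (using stand-in generators, not the real ones) of what zip() actually yields in this setup: a 4-tuple of (x, y) pairs per step, rather than the single (x, y) pair Keras expects:

```python
def fake_flow(name):
    # Stand-in for a flow_from_dataframe iterator: yields (images, scores) pairs.
    while True:
        yield (f"{name}_images", f"{name}_scores")

zipped = zip(fake_flow("ava"), fake_flow("para"), fake_flow("koniq"), fake_flow("spaq"))
batch = next(zipped)

print(len(batch))  # 4 -- four (x, y) pairs, not one (x, y) pair
print(batch[0])    # ('ava_images', 'ava_scores')
```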


Solution

  • I found that it is possible to create a combined generator for the training, test, and validation sets by using yield in a function. The problem with zip() is that it yields a tuple of four (x, y) pairs, whereas Keras expects each batch to be a single (inputs, targets) pair; the function below repackages the four batches into that format. It worked exactly as I intended.

    def combined_generator(gen1, gen2, gen3, gen4):
        while True:
            # Pull one (images, scores) batch from each dataset generator
            batch1 = next(gen1)
            batch2 = next(gen2)
            batch3 = next(gen3)
            batch4 = next(gen4)
            # Repackage into the single (inputs, targets) pair Keras expects
            inputs = [batch1[0], batch2[0], batch3[0], batch4[0]]
            targets = [batch1[1], batch2[1], batch3[1], batch4[1]]
            yield inputs, targets
    

    With this function, each image batch stays mapped to its respective score batch: batch1[0] (images) is mapped to batch1[1] (scores), batch2[0] to batch2[1], and so on.

    combined_train_gen = combined_generator(ava_train_generator, para_train_generator, koniq_train_generator, spaq_train_generator)
    
    combined_val_gen = combined_generator(ava_val_generator, para_val_generator, koniq_val_generator, spaq_val_generator)
    

    All that's left is to pass combined_train_gen and combined_val_gen to model.fit():

    history = model.fit(x=combined_train_gen,
                        steps_per_epoch = max(steps_per_epoch1, steps_per_epoch2, steps_per_epoch3, steps_per_epoch4),
                        epochs = config.epoch,
                        validation_data = combined_val_gen,
                        validation_steps = max(val_steps1, val_steps2, val_steps3, val_steps4),
                        callbacks = [
                          model_checkpoint_callback,
                          early_stopping_callback
                        ])