I am trying different models on the same dataset, one of them being autokeras.ImageClassifier. First I build the train, validation and test datasets:
img_size = (100,120,3)
train_dataset = get_dataset(x_train, y_train, img_size[:-1], 128)
valid_dataset = get_dataset(x_valid, y_valid, img_size[:-1], 128)
test_dataset = get_dataset(x_test, y_test, img_size[:-1], 128)
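get_dataset is a predefined helper of mine that builds the tf.data pipelines; its exact contents are not relevant to the problem, but roughly it looks like this (a minimal sketch, assuming it just wraps the arrays, resizes the images and batches them; the resize call and prefetch are illustrative):
import tensorflow as tf

def get_dataset(x, y, target_size, batch_size):
    # Wrap the arrays in a tf.data.Dataset, resize the images to target_size and batch them
    ds = tf.data.Dataset.from_tensor_slices((x, y))
    ds = ds.map(lambda img, label: (tf.image.resize(img, target_size), label))
    return ds.batch(batch_size).prefetch(tf.data.AUTOTUNE)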
Then I fit the model with an early stopping callback:
# - Create the network
model = ak.ImageClassifier(overwrite=True, max_trials=1, metrics=['accuracy'])
# - Train the network
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', mode='min', patience=2) # remaining arguments left at their defaults
history = model.fit(train_dataset, epochs=10, validation_data=valid_dataset, callbacks=[early_stop])
# - Evaluate the network
model.evaluate(test_dataset)
The problem is that when training is stopped by the callback, history is None: model.fit returns nothing instead of a Keras History object. I have not been able to find anything similar on the internet; for everyone else it seems to work properly. I know the problem is with the callback, because when I fit the model without any callback it works properly.
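To be explicit about what breaks afterwards, this is the kind of check that fails (purely illustrative):
print(history)                       # prints None when EarlyStopping fired
history.history['val_loss']          # AttributeError: 'NoneType' object has no attribute 'history'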
The output when training is stopped by the callback is the following:
Trial 1 Complete [00h 13m 18s]
val_loss: 4.089305400848389
Best val_loss So Far: 4.089305400848389
Total elapsed time: 00h 13m 18s
WARNING:absl:Found untraced functions such as _jit_compiled_convolution_op, _jit_compiled_convolution_op, _update_step_xla while saving (showing 3 of 3). These functions will not be directly callable after loading.
WARNING:tensorflow:Detecting that an object or model or tf.train.Checkpoint is being deleted with unrestored values. See the following logs for the specific values in question. To silence these warnings, use `status.expect_partial()`. See https://www.tensorflow.org/api_docs/python/tf/train/Checkpoint#restore for details about the status object returned by the restore function.
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.1
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.2
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.3
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.4
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.5
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.6
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.7
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.8
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.9
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.10
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.11
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.12
In case someone gets here looking for an answer: it appears to be an issue with the current version of autokeras and how it handles callbacks passed to fit.
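As a possible workaround (a sketch only, not verified against every autokeras version): let the tuner finish without callbacks, export the best Keras model with export_model(), and retrain it directly with plain Keras, whose fit does return a History object. The compile arguments below are assumptions (integer class labels); adjust them to your data:
# Export the best model found by AutoKeras and retrain it with plain Keras
exported = model.export_model()
# Assumption: integer class labels; use categorical_crossentropy if they are one-hot
exported.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
history = exported.fit(train_dataset, epochs=10, validation_data=valid_dataset, callbacks=[early_stop])
print(history.history['val_loss'])  # now a real History object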