I have observed an issue while using the Hyperband pruner in Optuna. According to the Hyperband algorithm, with min_resource = 5, max_resource = 20, and reduction_factor = 2, bracket 1 should start with 4 models, each trained for 5 epochs in the first round. In each subsequent round the number of surviving models is halved and the epoch budget of the survivors is doubled. Each successive bracket should also start with half as many models, i.e. bracket 2 starts with 2 models and bracket 3 with 1. Counting every model trained across all rounds and brackets, that gives (4 + 2 + 1) + (2 + 1) + 1 = 11 models in total, but Optuna is training far more than that.
Link to the paper: https://arxiv.org/pdf/1603.06560.pdf
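For reference, here is a quick sketch of the schedule I expected. It mirrors my reading of the paper's successive-halving loop with the numbers above; the bracket bookkeeping is my own illustration, not Optuna's code:

import math

R, r, eta = 20, 5, 2                   # max_resource, min_resource, reduction_factor
s_max = int(math.log(R / r, eta))      # 2, so brackets s = 2, 1, 0

total = 0
for s in range(s_max, -1, -1):
    n = eta ** s                       # initial models in this bracket: 4, 2, 1
    epochs = R // (eta ** s)           # epochs per model in the first round: 5, 10, 20
    while n >= 1:
        print(f"bracket s={s}: {n} model(s) x {epochs} epochs")
        total += n
        n //= eta                      # halve the survivors
        epochs *= eta                  # double their budget
print("total models trained:", total)  # (4+2+1) + (2+1) + 1 = 11

My reproduction script: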
import optuna
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.models import Sequential
# Toy dataset generation
def generate_toy_dataset():
    np.random.seed(0)
    X_train = np.random.rand(100, 10)
    y_train = np.random.randint(0, 2, size=(100,))
    X_val = np.random.rand(20, 10)
    y_val = np.random.randint(0, 2, size=(20,))
    return X_train, y_train, X_val, y_val

X_train, y_train, X_val, y_val = generate_toy_dataset()
# Model building function
def build_model(trial):
    model = Sequential()
    model.add(Dense(units=trial.suggest_int('unit_input', 20, 30),
                    activation='selu',
                    input_shape=(X_train.shape[1],)))
    num_layers = trial.suggest_int('num_layers', 2, 3)
    for i in range(num_layers):
        units = trial.suggest_int(f'num_layer_{i}', 20, 30)
        activation = trial.suggest_categorical(f'activation_layer_{i}', ['relu', 'selu', 'tanh'])
        model.add(Dense(units=units, activation=activation))
        if trial.suggest_categorical(f'dropout_layer_{i}', [True, False]):
            model.add(Dropout(rate=0.5))
    model.add(Dense(1, activation='sigmoid'))
    optimizer_name = trial.suggest_categorical('optimizer', ['adam', 'rmsprop'])
    if optimizer_name == 'adam':
        optimizer = tf.keras.optimizers.Adam()
    else:
        optimizer = tf.keras.optimizers.RMSprop()
    model.compile(optimizer=optimizer,
                  loss='binary_crossentropy',
                  metrics=['accuracy', tf.keras.metrics.AUC(name='val_auc')])
    return model
def objective(trial):
    model = build_model(trial)
    # Train for one epoch per trial (Keras default); the AUC metric is recorded in history
    history = model.fit(X_train, y_train, validation_data=(X_val, y_val), verbose=1)
    # Find the recorded AUC key (Keras suffixes metric names with _1, _2, ...
    # when models are rebuilt in the same process)
    auc_key = None
    for key in history.history.keys():
        if key.startswith('val_auc'):
            auc_key = key
            print(f"auc_key is {auc_key}")
            break
    if auc_key is None:
        raise ValueError("AUC metric not found in history. Make sure it's being recorded during training.")
    # Report validation AUC for each model
    if auc_key == "val_auc":
        step = 0
    else:
        step = int(auc_key.split('_')[-1])
    auc_value = history.history[auc_key][0]
    trial.report(auc_value, step=step)
    print(f"prune or not:- {trial.should_prune()}")
    if trial.should_prune():
        raise optuna.TrialPruned()
    return auc_value  # must be a float, not the full history list
# Optuna study creation
study = optuna.create_study(
    direction='maximize',
    pruner=optuna.pruners.HyperbandPruner(
        min_resource=5,
        max_resource=20,
        reduction_factor=2
    )
)

# Start optimization
study.optimize(objective)
You are using the default value of the parameter n_trials in the study.optimize function, which is None. According to the documentation, that means it will stop evaluating configurations only when it "times out".
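If you want a hard cap on the number of configurations, pass n_trials (or timeout) explicitly. For example (11 here is just the count expected in the question, not a value Optuna derives):

# Stop after exactly 11 trials
study.optimize(objective, n_trials=11)

# ...or stop after a wall-clock budget instead
study.optimize(objective, timeout=600)  # seconds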
Optuna's Hyperband implementation is not identical to what was described in the original article. It has some tweaks to make the algorithm compatible with Optuna's inner workings.
You can check the number of successive halving brackets like this: study.pruner._n_brackets. And you can check the budget allocated to each bracket like this: study.pruner._trial_allocation_budgets.
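For example, with the settings from the question (note that these are private attributes, populated once optimization has started, and they may change between Optuna versions):

# after study.optimize(...) has run at least one trial:
print(study.pruner._n_brackets)                # number of successive-halving brackets
print(study.pruner._trial_allocation_budgets)  # budget allocated to each bracket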
What I am still trying to figure out is how n_trials plays into defining the number of configurations that will be examined in each bracket.