Tags: python, scikit-learn, keras-tuner

Unclear scoring metric of keras-tuner's SklearnTuner


In the docs for keras-tuner's SklearnTuner, one finds:

"Note that for this Tuner, the objective for the Oracle should always be set to Objective('score', direction='max')"

When I set the argument "scoring=metrics.make_scorer(metrics.mean_squared_error)" (which, as I read the Sklearn docs, should be equivalent to "neg_mean_squared_error"), keras-tuner prints the "Best score So Far" after every trial. I expected these values to be strictly negative (since the scoring function is supposed to be maximized), but I only ever get positive scores. What is the logic behind this?

My keras-tuner tuner is currently set up like this:

    import keras_tuner as kt
    from sklearn import metrics
    from sklearn.model_selection import KFold

    tuner = kt.tuners.SklearnTuner(
        oracle=kt.oracles.BayesianOptimizationOracle(
            objective=kt.Objective('score', 'max')),
        hypermodel=self.build_model,
        scoring=metrics.make_scorer(metrics.mean_squared_error),
        cv=KFold(5))
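
To illustrate, here is a minimal standalone check of what this scorer returns outside the tuner (the toy data and the LinearSVR model are just placeholders, not my actual setup):

    from sklearn import metrics
    from sklearn.datasets import make_regression
    from sklearn.svm import LinearSVR

    # Fit any regressor on toy data and inspect the sign of the scorer output
    X, y = make_regression(n_samples=100, n_features=3, random_state=0)
    model = LinearSVR(max_iter=10000).fit(X, y)

    scorer = metrics.make_scorer(metrics.mean_squared_error)
    print(scorer(model, X, y))  # prints a positive number, not a negative one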

Solution

  • My tuner tunes a Sklearn "LinearSVR" model, whose default scoring is R². Since that score measures goodness of fit, one wants to maximize it. In the tuner above, I changed the scoring to mean squared error, which is a quantity one wants to minimize. Although the table at https://scikit-learn.org/stable/modules/model_evaluation.html may look confusing (it did to me), the "neg_mean_squared_error" scoring is not equivalent to what metrics.make_scorer(metrics.mean_squared_error) does by default: "neg_mean_squared_error" negates the output of the function (so that larger values are better), whereas make_scorer with its default greater_is_better=True returns the raw, positive MSE, which the tuner then happily maximizes. The keras-tuner docs line:

    "Note that for this Tuner, the objective for the Oracle should always be set to Objective('score', direction='max')"

    is, in my opinion, not right in stating that it should always be set to "max": with a scoring function that is meant to be minimized, either the scorer has to negate its output or the objective direction has to be "min" (see the sketch below).
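
For completeness, here is a sketch of how the tuner above could be adjusted under this reading; it is only an illustration, not the single correct fix. Passing greater_is_better=False to make_scorer makes the scorer return the negated MSE (which is exactly what sklearn's "neg_mean_squared_error" does), so the "max" direction from the docs becomes consistent again. The alternative, following my point above, would be to keep the raw MSE scorer and set the objective direction to "min".

    # Same tuner as in the question; only the scoring changes. The scorer now
    # returns the negated MSE, so maximizing the 'score' minimizes the error.
    tuner = kt.tuners.SklearnTuner(
        oracle=kt.oracles.BayesianOptimizationOracle(
            objective=kt.Objective('score', 'max')),
        hypermodel=self.build_model,
        scoring=metrics.make_scorer(metrics.mean_squared_error,
                                    greater_is_better=False),
        cv=KFold(5))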