python, scikit-learn, xgboost, parameter-tuning

Why do I get different results every time I run parameter tuning?


Can anyone explain why this happens? Every time I run the model and tune the hyperparameters with Random Search, I get a set of good parameters, and applying them yields a good result. But when I run everything again and tune the parameters once more, the accuracy is lower than last time, even though the parameters look the same as before.

XGBoost - the attached screenshot shows my best result with this model, but any time I run it again, the result is lower than that.

Can anyone explain how to get a better result?


Solution

  • You say you used Random Search. Random Search is a technique that selects random combinations of hyperparameters for your model during the tuning process. Since it's random, the chosen combinations might be different each time you run your code, which can lead to varying accuracy results.

    To make sure you get consistent results each time you run your code, you can set a "random state" or "seed". This is a fixed number that ensures the random process starts from the same point, so it produces the same results every time.

    To get the same results every time, you not only have to set the random state for the Random Search, but everywhere sampling is performed: e.g. when calling train_test_split, when constructing StratifiedKFold (which only uses a random state with shuffle=True), or when training your XGBClassifier. A minimal sketch of a fully seeded pipeline is shown below.
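    The sketch below illustrates this idea, assuming a RandomizedSearchCV-based workflow. The synthetic dataset, the search space, and all variable names are assumptions made purely for illustration; the point is simply that the same seed is passed to every component that samples randomly.

    ```python
    # Minimal sketch of a fully seeded tuning pipeline (illustrative only).
    from sklearn.datasets import make_classification
    from sklearn.model_selection import (
        RandomizedSearchCV,
        StratifiedKFold,
        train_test_split,
    )
    from xgboost import XGBClassifier

    SEED = 42  # one fixed seed, reused everywhere sampling happens

    # Synthetic data stands in for your dataset (assumption for illustration).
    X, y = make_classification(n_samples=1000, n_features=20, random_state=SEED)

    # 1) Seeded train/test split.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=SEED
    )

    # 2) Seeded cross-validation folds (random_state only applies with shuffle=True).
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=SEED)

    # 3) Seeded model: XGBoost's own row/column subsampling is also random.
    model = XGBClassifier(eval_metric="logloss", random_state=SEED)

    # Hypothetical search space, for illustration only.
    param_distributions = {
        "n_estimators": [100, 200, 400],
        "max_depth": [3, 5, 7],
        "learning_rate": [0.01, 0.05, 0.1],
        "subsample": [0.7, 0.85, 1.0],
    }

    # 4) Seeded random search, so the same parameter combinations are drawn every run.
    search = RandomizedSearchCV(
        model,
        param_distributions,
        n_iter=20,
        cv=cv,
        scoring="accuracy",
        random_state=SEED,
    )
    search.fit(X_train, y_train)

    print("Best params:", search.best_params_)
    print("Test accuracy:", search.best_estimator_.score(X_test, y_test))
    ```

    With all four seeds fixed, repeated runs on the same data should draw the same parameter combinations and report the same scores; keep in mind that the "best" parameters found this way still only reflect one particular random sample of the search space.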