h2o, sparkling-water

How to make H2OGridSearch for H2OGradientBoostingEstimator repeatable (reproducible) in a Spark environment?


I am using the following code to run a GBM grid search in Sparkling Water. Even though I have set seed and score_each_iteration=True, it still produces a different AUC every time I run it.

from h2o.grid.grid_search import H2OGridSearch
from h2o.estimators.gbm import H2OGradientBoostingEstimator

# initialize the estimator
gbm_cov = H2OGradientBoostingEstimator(sample_rate=0.7,
                                       col_sample_rate=0.7,
                                       ntrees=1000,
                                       balance_classes=True,
                                       score_each_iteration=True,
                                       nfolds=5,
                                       seed=1234)

# set up hyper parameter search space
gbm_hyper_params = {'learn_rate': [0.01, 0.015, 0.025, 0.05, 0.1],
                    'max_depth': [3, 5, 7, 9, 12],
                    #'sample_rate': [i * 0.1 for i in range(6, 11)],
                    #'col_sample_rate': [i * 0.1 for i in range(6, 11)],
                    #'ntrees': [i * 100 for i in range(1, 11)]
                    }

# define the search criteria
gbm_search_criteria = {'strategy': "RandomDiscrete",
                       'max_models': 10,
                       'max_runtime_secs': 1800,
                       'stopping_metric': eval_metric,  # eval_metric is defined earlier, e.g. "AUC"
                       'stopping_tolerance': 0.001,
                       'stopping_rounds': 3,
                       'seed': 1
                       }

# build the grid search
gbm_grid = H2OGridSearch(model=gbm_cov,
                         hyper_params=gbm_hyper_params,
                         search_criteria=gbm_search_criteria)  # "Cartesian" also works if the search space is small

# train using the grid
gbm_grid.train(x=top_feature, y=y, training_frame=htrain)
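
The snippet above also assumes that htrain, y, top_feature and eval_metric were defined earlier. A minimal sketch of that setup in PySparkling might look like the following; spark_df, the column names and the "AUC" metric are placeholders for illustration, not part of the original code.

from pysparkling import H2OContext

hc = H2OContext.getOrCreate()                  # older PySparkling versions take the SparkSession as an argument

htrain = hc.asH2OFrame(spark_df, "htrain")     # convert a Spark DataFrame to an H2OFrame
y = "label"                                    # response column (placeholder name)
top_feature = [c for c in htrain.columns if c != y]   # predictor columns
eval_metric = "AUC"                            # metric used for early stopping in the search criteria

htrain[y] = htrain[y].asfactor()               # make the response categorical for classification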

Solution

  • Commenting out 'max_runtime_secs': 1800 solves the reproducibility issue: with a wall-clock limit, the number of models (and trees) built depends on how fast the cluster happens to run, so two runs with the same seed can stop at different points. One more thing I found, although I don't know why, is that if we move the early-stopping parameters below from the search criteria to the H2OGradientBoostingEstimator, the code runs faster (see the sketch after this snippet).

    'stopping_metric': eval_metric, 
    'stopping_tolerance': 0.001, 
    'stopping_rounds': 3,
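
A minimal sketch of the adjusted configuration, combining both changes (max_runtime_secs removed, early stopping moved onto the estimator). The parameter values are taken from the question; eval_metric, top_feature, y and htrain are assumed to be defined as above.

# reproducible variant: no time-based limit, early stopping on the estimator itself
gbm_cov = H2OGradientBoostingEstimator(sample_rate=0.7,
                                       col_sample_rate=0.7,
                                       ntrees=1000,
                                       balance_classes=True,
                                       score_each_iteration=True,
                                       nfolds=5,
                                       seed=1234,
                                       stopping_metric=eval_metric,   # e.g. "AUC"
                                       stopping_tolerance=0.001,
                                       stopping_rounds=3)

gbm_search_criteria = {'strategy': "RandomDiscrete",
                       'max_models': 10,
                       # 'max_runtime_secs': 1800,   # time-based stopping breaks reproducibility
                       'seed': 1}

gbm_grid = H2OGridSearch(model=gbm_cov,
                         hyper_params=gbm_hyper_params,
                         search_criteria=gbm_search_criteria)
gbm_grid.train(x=top_feature, y=y, training_frame=htrain)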