Should I ensemble XGBoost models trained on the same data but with different hyperparameters? I have N XGBRegressor models, all trained on the same data but each with a different parameter set, for example:
Model 1:
Mean validation score: 0.497 (std: 0.433)
Parameters: {'alpha': 7, 'colsample': 0.263807428519774, 'eta': 0.3060158986459771, 'lambda': 9, 'max_depth': 3, 'min_child_weight': 8, 'subsample': 0.9574396763321433}
Model 2:
Mean validation score: 0.496 (std: 0.409)
Parameters: {'alpha': 10, 'colsample': 0.46293171278876444, 'eta': 0.060401759236472174, 'lambda': 1, 'max_depth': 5, 'min_child_weight': 7, 'subsample': 0.9262228216285202}
Model 3:
Mean validation score: 0.495 (std: 0.406)
Parameters: {'alpha': 1, 'colsample': 0.9232002538248327, 'eta': 0.6040805280556929, 'lambda': 5, 'max_depth': 4, 'min_child_weight': 11, 'subsample': 0.9419219597299463}
If so, how would I go about doing that?
Answer (credit to @ImSo3k): yes, you can stack them with scikit-learn's StackingRegressor:
sklearn.ensemble.StackingRegressor(estimators, final_estimator=None, *, cv=None, n_jobs=None, passthrough=False, verbose=0)
estimators should be a list of (name, estimator) tuples, one per regressor; the final_estimator (RidgeCV by default) is fitted on their cross-validated predictions and learns how to combine them.
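A minimal sketch of how that could look for the three models above. `X_train`, `y_train`, and `X_test` are assumed placeholders for your data, the hyperparameter values are copied from the question, and the question's short keys are mapped onto the XGBRegressor constructor names on the assumption that 'eta' means learning_rate, 'alpha'/'lambda' mean reg_alpha/reg_lambda, and 'colsample' means colsample_bytree:

```python
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import RidgeCV
from xgboost import XGBRegressor

# One (name, estimator) tuple per base model; parameter values taken
# from the three parameter sets listed in the question.
estimators = [
    ("xgb1", XGBRegressor(reg_alpha=7, colsample_bytree=0.263807428519774,
                          learning_rate=0.3060158986459771, reg_lambda=9,
                          max_depth=3, min_child_weight=8,
                          subsample=0.9574396763321433)),
    ("xgb2", XGBRegressor(reg_alpha=10, colsample_bytree=0.46293171278876444,
                          learning_rate=0.060401759236472174, reg_lambda=1,
                          max_depth=5, min_child_weight=7,
                          subsample=0.9262228216285202)),
    ("xgb3", XGBRegressor(reg_alpha=1, colsample_bytree=0.9232002538248327,
                          learning_rate=0.6040805280556929, reg_lambda=5,
                          max_depth=4, min_child_weight=11,
                          subsample=0.9419219597299463)),
]

# The final estimator is trained on out-of-fold predictions of the base
# models (cv=5 folds here); RidgeCV is also what you get with
# final_estimator=None.
stack = StackingRegressor(estimators=estimators,
                          final_estimator=RidgeCV(), cv=5)
stack.fit(X_train, y_train)
y_pred = stack.predict(X_test)
```

One caveat: since all three models are trained on the same data and have nearly identical validation scores, their errors may be highly correlated, so the gain from stacking can be modest; it is worth comparing the stacked score against the best single model before committing to the extra complexity.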