Below, I have created mlr3 graph and trained it on sample dataset.
I know how to create predictions for final ste (regression average), but is it possible to get predictions for models before averaging? The goal is to compare individual model performance with final model.
Bonus question, can individual models be autotuners themselves and if yes, how to incorporate them in pipeline?
library(mlr3)
library(mlr3learners)
library(mlr3pipelines)
library(mlr3tuning)
# task
mlr_tasks
task = tsk("mtcars")
# graph
learners_l = list(
ranger = lrn("regr.ranger", id = "ranger"),
xgboost = lrn("regr.xgboost", id = "xgboost")
)
choices = c("ranger", "lm", "kknn", "xgboost")
learners = gunion(learners_l) %>>%
po("regravg", innum = length(learners_l))
# create graph
graph = gunion(list(
ranger = lrn("regr.ranger", id = "ranger"),
xgboost = lrn("regr.xgboost", id = "xgboost")
)) %>>%
po("regravg", innum = length(learners_l))
plot(graph)
graph_learner = as_learner(graph)
# search space
search_space = ps(
ranger.ranger.max.depth = p_fct(levels = c(2L, 3L))
)
# auto tuner
at = auto_tuner(
tuner = tnr("grid_search"),
learner = graph_learner,
resampling = rsmp("holdout"),
measure = msr("regr.mse"),
search_space = search_space,
term_evals = 2,
store_models = TRUE
)
# train
at$train(task)
# IS IT POSSIBLE TO EXTRACT PREDICTIONS OF RANGER AND XGBOOST TO COMPARE IT WITH ENSAMBLE?
The trained learners are located in the $graph_model
slot.
at$model$learner$graph_model$pipeops$ranger.ranger$predict(list(task))
at$model$learner$graph_model$pipeops$xgboost.xgboost$predict(list(task))
Bonus question, can individual models be autotuners themselves and if yes, how to incorporate them in pipeline?
Yes. AutoTuner
objects are learners and can be used like learners. You have to decide whether this makes sense.