Practical Tuning Series - Build an Automated Machine Learning System

mlr3tuning tuning optimization nested resampling mlr3pipelines automl pima data set classification practical tuning series

We implement a simple automated machine learning (AutoML) system which includes preprocessing, a switch between multiple learners and hyperparameter tuning.

Marc Becker , Theresa Ullmann , Michel Lang , Bernd Bischl , Jakob Richter , Martin Binder
03-11-2021

Scope

This is the third part of the practical tuning series. The other parts can be found here:

In this post, we implement a simple automated machine learning (AutoML) system which includes preprocessing, a switch between multiple learners and hyperparameter tuning. For this, we build a pipeline with the mlr3pipelines extension package. Additionally, we use nested resampling to get an unbiased performance estimate of our AutoML system.

Prerequisites

We load the mlr3verse package which pulls in the most important packages for this example.

We initialize the random number generator with a fixed seed for reproducibility, and decrease the verbosity of the logger to keep the output concise. The lgr package is used for logging in all mlr3 packages. The mlr3 logger prints the logging messages from the base package, whereas the bbotk logger is responsible for logging messages from the optimization packages (e.g. mlr3tuning).

set.seed(7832)
lgr::get_logger("mlr3")$set_threshold("warn")
lgr::get_logger("bbotk")$set_threshold("warn")

In this example, we use the Pima Indians Diabetes data set, where the goal is to predict whether or not a patient has diabetes. The patients are described by 8 numeric features, some of which contain missing values.

task = tsk("pima")

Branching

We use three popular machine learning algorithms: k-nearest-neighbors, support vector machines and random forests.

learners = list(
  lrn("classif.kknn", id = "kknn"),
  lrn("classif.svm", id = "svm", type = "C-classification"),
  lrn("classif.ranger", id = "ranger")
)

The PipeOpBranch allows us to specify multiple alternative paths. In this graph, the paths lead to the different learner models. The selection hyperparameter controls which path is executed, i.e., which learner is used to fit a model. It is important to use PipeOpUnbranch after the learners so that the outputs are merged into one result object. We visualize the graph with branching below.

graph = 
  po("branch", options = c("kknn", "svm", "ranger")) %>>%
  gunion(lapply(learners, po)) %>>%
  po("unbranch")
graph$plot()

Alternatively, we can use the ppl()-shortcut to load a predefined graph from the mlr_graphs dictionary. For this, the learner list must be named.

learners = list(
  kknn = lrn("classif.kknn", id = "kknn"),
  svm = lrn("classif.svm", id = "svm", type = "C-classification"),
  ranger = lrn("classif.ranger", id = "ranger")
)

graph = ppl("branch", lapply(learners, po))

Preprocessing

The task has missing data in five columns.

round(task$missings() / task$nrow, 2)
diabetes      age  glucose  insulin     mass pedigree pregnant pressure 
    0.00     0.00     0.01     0.49     0.01     0.00     0.00     0.05 
 triceps 
    0.30 

The ppl("robustify") function creates a preprocessing pipeline based on our task. The resulting pipeline imputes missing values with PipeOpImputeHist and creates a dummy column (PipeOpMissInd) which indicates the imputed missing values. Internally, this creates two paths whose results are combined with PipeOpFeatureUnion. In contrast to PipeOpBranch, both paths are executed. Additionally, "robustify" adds PipeOpEncode to encode factor columns and PipeOpRemoveConstants to remove features with a constant value.

graph = ppl("robustify", task = task, factors_to_numeric = TRUE) %>>%
  graph
plot(graph)

We could also create the preprocessing pipeline manually.

gunion(list(po("imputehist"), po("missind", affect_columns = selector_type(c("numeric", "integer"))))) %>>%
  po("featureunion") %>>%
  po("encode") %>>%
  po("removeconstants")
Graph with 5 PipeOps:
              ID         State        sccssors          prdcssors
      imputehist <<UNTRAINED>>    featureunion                   
         missind <<UNTRAINED>>    featureunion                   
    featureunion <<UNTRAINED>>          encode imputehist,missind
          encode <<UNTRAINED>> removeconstants       featureunion
 removeconstants <<UNTRAINED>>                             encode

Graph Learner

We create a GraphLearner which encapsulates the pipeline and can be used like a learner.

graph_learner = GraphLearner$new(graph)

The parameter set of the graph learner includes all hyperparameters from all contained learners. The hyperparameter ids are prefixed with the corresponding learner ids. The first hyperparameter branch.selection controls which learner is used.

print(graph_learner$param_set)
<ParamSetCollection>
                                     id    class lower upper nlevels
 1:                    branch.selection ParamFct    NA    NA       3
 2:               encode.affect_columns ParamUty    NA    NA     Inf
 3:                       encode.method ParamFct    NA    NA       5
 4:           imputehist.affect_columns ParamUty    NA    NA     Inf
 5:                       kknn.distance ParamDbl     0   Inf     Inf
 6:                              kknn.k ParamInt     1   Inf     Inf
 7:                         kknn.kernel ParamFct    NA    NA      10
 8:                          kknn.scale ParamLgl    NA    NA       2
 9:                        kknn.ykernel ParamUty    NA    NA     Inf
10:              missind.affect_columns ParamUty    NA    NA     Inf
11:                        missind.type ParamFct    NA    NA       4
12:                       missind.which ParamFct    NA    NA       2
13:                        ranger.alpha ParamDbl  -Inf   Inf     Inf
14:       ranger.always.split.variables ParamUty    NA    NA     Inf
15:                ranger.class.weights ParamDbl  -Inf   Inf     Inf
16:                      ranger.holdout ParamLgl    NA    NA       2
17:                   ranger.importance ParamFct    NA    NA       4
18:                   ranger.keep.inbag ParamLgl    NA    NA       2
19:                    ranger.max.depth ParamInt  -Inf   Inf     Inf
20:                ranger.min.node.size ParamInt     1   Inf     Inf
21:                     ranger.min.prop ParamDbl  -Inf   Inf     Inf
22:                      ranger.minprop ParamDbl  -Inf   Inf     Inf
23:                         ranger.mtry ParamInt     1   Inf     Inf
24:            ranger.num.random.splits ParamInt     1   Inf     Inf
25:                  ranger.num.threads ParamInt     1   Inf     Inf
26:                    ranger.num.trees ParamInt     1   Inf     Inf
27:                    ranger.oob.error ParamLgl    NA    NA       2
28:        ranger.regularization.factor ParamUty    NA    NA     Inf
29:      ranger.regularization.usedepth ParamLgl    NA    NA       2
30:                      ranger.replace ParamLgl    NA    NA       2
31:    ranger.respect.unordered.factors ParamFct    NA    NA       3
32:              ranger.sample.fraction ParamDbl     0     1     Inf
33:                  ranger.save.memory ParamLgl    NA    NA       2
34: ranger.scale.permutation.importance ParamLgl    NA    NA       2
35:                    ranger.se.method ParamFct    NA    NA       2
36:                         ranger.seed ParamInt  -Inf   Inf     Inf
37:         ranger.split.select.weights ParamDbl     0     1     Inf
38:                    ranger.splitrule ParamFct    NA    NA       2
39:                      ranger.verbose ParamLgl    NA    NA       2
40:                 ranger.write.forest ParamLgl    NA    NA       2
41:             removeconstants.abs_tol ParamDbl     0   Inf     Inf
42:      removeconstants.affect_columns ParamUty    NA    NA     Inf
43:           removeconstants.na_ignore ParamLgl    NA    NA       2
44:               removeconstants.ratio ParamDbl     0     1     Inf
45:             removeconstants.rel_tol ParamDbl     0   Inf     Inf
46:                       svm.cachesize ParamDbl  -Inf   Inf     Inf
47:                   svm.class.weights ParamUty    NA    NA     Inf
48:                           svm.coef0 ParamDbl  -Inf   Inf     Inf
49:                            svm.cost ParamDbl     0   Inf     Inf
50:                           svm.cross ParamInt     0   Inf     Inf
51:                 svm.decision.values ParamLgl    NA    NA       2
52:                          svm.degree ParamInt     1   Inf     Inf
53:                          svm.fitted ParamLgl    NA    NA       2
54:                           svm.gamma ParamDbl     0   Inf     Inf
55:                          svm.kernel ParamFct    NA    NA       4
56:                              svm.nu ParamDbl  -Inf   Inf     Inf
57:                           svm.scale ParamUty    NA    NA     Inf
58:                       svm.shrinking ParamLgl    NA    NA       2
59:                       svm.tolerance ParamDbl     0   Inf     Inf
60:                            svm.type ParamFct    NA    NA       2
                                     id    class lower upper nlevels
             default           parents            value
 1:   <NoDefault[3]>                               kknn
 2:    <Selector[1]>                                   
 3:   <NoDefault[3]>                            one-hot
 4:   <NoDefault[3]>                                   
 5:                2                                   
 6:                7                                   
 7:          optimal                                   
 8:             TRUE                                   
 9:                                                    
10:    <Selector[1]>                      <Selector[1]>
11:   <NoDefault[3]>                             factor
12:   <NoDefault[3]>                      missing_train
13:              0.5                                   
14:   <NoDefault[3]>                                   
15:                                                    
16:            FALSE                                   
17:   <NoDefault[3]>                                   
18:            FALSE                                   
19:                                                    
20:                1                                   
21:              0.1                                   
22:              0.1                                   
23:   <NoDefault[3]>                                   
24:                1  ranger.splitrule                 
25:                1                                  1
26:              500                                   
27:             TRUE                                   
28:                1                                   
29:            FALSE                                   
30:             TRUE                                   
31:           ignore                                   
32:   <NoDefault[3]>                                   
33:            FALSE                                   
34:            FALSE ranger.importance                 
35:          infjack                                   
36:                                                    
37:   <NoDefault[3]>                                   
38:             gini                                   
39:             TRUE                                   
40:             TRUE                                   
41:   <NoDefault[3]>                              1e-08
42:    <Selector[1]>                                   
43:   <NoDefault[3]>                               TRUE
44:   <NoDefault[3]>                                  0
45:   <NoDefault[3]>                              1e-08
46:               40                                   
47:                                                    
48:                0        svm.kernel                 
49:                1          svm.type                 
50:                0                                   
51:            FALSE                                   
52:                3        svm.kernel                 
53:             TRUE                                   
54:   <NoDefault[3]>        svm.kernel                 
55:           radial                                   
56:              0.5          svm.type                 
57:             TRUE                                   
58:             TRUE                                   
59:            0.001                                   
60: C-classification                   C-classification
             default           parents            value

Tune the pipeline

We will only tune one hyperparameter for each learner in this example. Additionally, we tune the branching parameter which selects one of the three learners. We have to specify that a hyperparameter is only valid for a certain learner by using depends = branch.selection == <learner_id>.

# branch
graph_learner$param_set$values$branch.selection = to_tune(c("kknn", "svm", "ranger"))

# kknn
graph_learner$param_set$values$kknn.k = to_tune(p_int(3, 50, logscale = TRUE, depends = branch.selection == "kknn"))

# svm
graph_learner$param_set$values$svm.cost = to_tune(p_dbl(-1, 1, trafo = function(x) 10^x, depends = branch.selection == "svm"))

# ranger
graph_learner$param_set$values$ranger.mtry = to_tune(p_int(1, 8, depends = branch.selection == "ranger"))

# short learner id for printing
graph_learner$id = "graph_learner"

We define a tuning instance and select a random search, which is stopped after 20 evaluated configurations.

instance = tune(
  method = "random_search", 
  task = task, 
  learner = graph_learner, 
  resampling = rsmp("cv", folds = 3), 
  measure = msr("classif.ce"),
  term_evals = 20
)
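
The best configuration found and its estimated classification error can be inspected directly on the instance, for example by printing instance$result (a minimal sketch; the exact columns shown depend on the mlr3tuning version).

instance$result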

The following shows a quick way to visualize the tuning results.

autoplot(instance, type = "marginal", cols_x = c("x_domain_kknn.k","x_domain_svm.cost", "ranger.mtry"))

Final Model

We add the optimized hyperparameters to the graph learner and train the learner on the full dataset.

learner = GraphLearner$new(graph)
learner$param_set$values = instance$result_learner_param_vals
learner$train(task)

The trained model can now be used to make predictions on new data. A common mistake is to report the performance estimated on the resampling sets on which the tuning was performed (instance$result_y) as the model’s performance. Instead, we have to use nested resampling to get an unbiased performance estimate.
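
Before turning to nested resampling, here is a minimal sketch of predicting with the trained learner. For illustration, we simply reuse a few rows of the task's feature data as stand-in new observations; in practice, new_data would be a data.frame of unseen patients.

# hypothetical new observations: first five rows of the task's feature columns
new_data = task$data(rows = 1:5, cols = task$feature_names)
learner$predict_newdata(new_data)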

Nested Resampling

We use nested resampling to get an unbiased estimate of the predictive performance of our graph learner.

graph_learner = GraphLearner$new(graph)
graph_learner$param_set$values$branch.selection = to_tune(c("kknn", "svm", "ranger"))
graph_learner$param_set$values$kknn.k = to_tune(p_int(3, 50, logscale = TRUE, depends = branch.selection == "kknn"))
graph_learner$param_set$values$svm.cost = to_tune(p_dbl(-1, 1, trafo = function(x) 10^x, depends = branch.selection == "svm"))
graph_learner$param_set$values$ranger.mtry = to_tune(p_int(1, 8, depends = branch.selection == "ranger"))
graph_learner$id = "graph_learner"

inner_resampling = rsmp("cv", folds = 3)
at = AutoTuner$new(
  learner = graph_learner,
  resampling = inner_resampling,
  measure = msr("classif.ce"),
  terminator = trm("evals", n_evals = 10),
  tuner = tnr("random_search")
)

outer_resampling = rsmp("cv", folds = 3)
rr = resample(task, at, outer_resampling, store_models = TRUE)

We check the inner tuning results for stable hyperparameters, i.e. the selected hyperparameters should not vary too much between the outer folds. In this example, the small data set and the low number of resampling iterations may introduce too much randomness, so we might observe unstable models. Usually, we aim for stable hyperparameter selections on all outer training sets.
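
One way to obtain the inner tuning results is the extract_inner_tuning_results() helper from mlr3tuning (the table below shows a subset of its columns).

extract_inner_tuning_results(rr)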

   kknn.k  svm.cost ranger.mtry branch.selection learner_param_vals  x_domain
1:     NA        NA           6           ranger         <list[12]> <list[2]>
2:     NA 0.1235806          NA              svm         <list[12]> <list[2]>
3:     NA        NA           5           ranger         <list[12]> <list[2]>
   classif.ce
1:  0.2344341
2:  0.2284371
3:  0.2596950

Next, we compare the predictive performances estimated on the outer resampling to those estimated on the inner resampling. Significantly lower predictive performance on the outer resampling indicates that the models with the optimized hyperparameters overfit the data.

rr$score()
                task task_id         learner          learner_id
1: <TaskClassif[46]>    pima <AutoTuner[41]> graph_learner.tuned
2: <TaskClassif[46]>    pima <AutoTuner[41]> graph_learner.tuned
3: <TaskClassif[46]>    pima <AutoTuner[41]> graph_learner.tuned
           resampling resampling_id iteration              prediction
1: <ResamplingCV[19]>            cv         1 <PredictionClassif[19]>
2: <ResamplingCV[19]>            cv         2 <PredictionClassif[19]>
3: <ResamplingCV[19]>            cv         3 <PredictionClassif[19]>
   classif.ce
1:  0.2539062
2:  0.2578125
3:  0.2148438

The aggregated performance of all outer resampling iterations is essentially the unbiased performance of the graph learner with the optimal hyperparameters found by random search.

rr$aggregate()
classif.ce 
 0.2421875 

The nested resampling procedure above can be shortened by using the tune_nested()-shortcut.

graph_learner = GraphLearner$new(graph)
graph_learner$param_set$values$branch.selection = to_tune(c("kknn", "svm", "ranger"))
graph_learner$param_set$values$kknn.k = to_tune(p_int(3, 50, logscale = TRUE, depends = branch.selection == "kknn"))
graph_learner$param_set$values$svm.cost = to_tune(p_dbl(-1, 1, trafo = function(x) 10^x, depends = branch.selection == "svm"))
graph_learner$param_set$values$ranger.mtry = to_tune(p_int(1, 8, depends = branch.selection == "ranger"))
graph_learner$id = "graph_learner"

rr = tune_nested(
  method = "random_search",
  task = task,
  learner = graph_learner,
  inner_resampling = rsmp("cv", folds = 3),
  outer_resampling = rsmp("cv", folds = 3),
  measure = msr("classif.ce"),
  term_evals = 10
)

Resources

The mlr3book includes chapters on pipelines and hyperparameter tuning. The mlr3cheatsheets contain frequently used commands and workflows of mlr3.

Citation

For attribution, please cite this work as

Becker, et al. (2021, March 11). mlr3gallery: Practical Tuning Series - Build an Automated Machine Learning System. Retrieved from https://mlr3gallery.mlr-org.com/posts/2021-03-11-practical-tuning-series-build-an-automated-machine-learning-system/

BibTeX citation

@misc{becker2021practical,
  author = {Becker, Marc and Ullmann, Theresa and Lang, Michel and Bischl, Bernd and Richter, Jakob and Binder, Martin},
  title = {mlr3gallery: Practical Tuning Series - Build an Automated Machine Learning System},
  url = {https://mlr3gallery.mlr-org.com/posts/2021-03-11-practical-tuning-series-build-an-automated-machine-learning-system/},
  year = {2021}
}