Practical Tuning Series - Tune a Preprocessing Pipeline

mlr3tuning tuning optimization nested resampling mlr3pipelines pima data set classification practical tuning series

We build a simple preprocessing pipeline and tune it.

Marc Becker , Theresa Ullmann , Michel Lang , Bernd Bischl , Jakob Richter , Martin Binder
03-10-2021

Scope

This is the second part of the practical tuning series. The other parts can be found here:

In this post, we build a simple preprocessing pipeline and tune it. For this, we use the mlr3pipelines extension package. First, we impute missing values in the Pima Indians Diabetes data set. After that, we encode a factor column into numerical dummy columns. Next, we combine both preprocessing steps into a Graph and create a GraphLearner. Finally, nested resampling is used to compare the performance of two imputation methods.

Prerequisites

We load the mlr3verse package which pulls in the most important packages for this example.

We initialize the random number generator with a fixed seed for reproducibility, and decrease the verbosity of the logger to keep the output concise. The lgr package is used for logging in all mlr3 packages. The mlr3 logger prints the logging messages of the base package, whereas the bbotk logger is responsible for the logging messages of the optimization packages (e.g. mlr3tuning).

set.seed(7832)
lgr::get_logger("mlr3")$set_threshold("warn")
lgr::get_logger("bbotk")$set_threshold("warn")

In this example, we use the Pima Indians Diabetes data set which is used to predict whether or not a patient has diabetes. The patients are characterized by 8 numeric features of which some have missing values. We alter the data set by categorizing the feature pressure (blood pressure) into the categories "low", "mid", and "high".

# retrieve the task from mlr3
task = tsk("pima")

# create data frame with categorized pressure feature
data = task$data(cols = "pressure")
breaks = quantile(data$pressure, probs = c(0, 0.33, 0.66, 1), na.rm = TRUE)
data$pressure = cut(data$pressure, breaks, labels = c("low", "mid", "high"))

# overwrite the feature in the task
task$cbind(data)

# generate a quick textual overview
skimr::skim(task$data())
Table 1: Data summary
Name task$data()
Number of rows 768
Number of columns 9
Key NULL
_______________________
Column type frequency:
factor 2
numeric 7
________________________
Group variables None

Variable type: factor

skim_variable n_missing complete_rate ordered n_unique top_counts
diabetes 0 1.00 FALSE 2 neg: 500, pos: 268
pressure 36 0.95 FALSE 3 low: 282, mid: 245, hig: 205

Variable type: numeric

skim_variable n_missing complete_rate mean sd p0 p25 p50 p75 p100 hist
age 0 1.00 33.24 11.76 21.00 24.00 29.00 41.00 81.00 ▇▃▁▁▁
glucose 5 0.99 121.69 30.54 44.00 99.00 117.00 141.00 199.00 ▁▇▇▃▂
insulin 374 0.51 155.55 118.78 14.00 76.25 125.00 190.00 846.00 ▇▂▁▁▁
mass 11 0.99 32.46 6.92 18.20 27.50 32.30 36.60 67.10 ▅▇▃▁▁
pedigree 0 1.00 0.47 0.33 0.08 0.24 0.37 0.63 2.42 ▇▃▁▁▁
pregnant 0 1.00 3.85 3.37 0.00 1.00 3.00 6.00 17.00 ▇▃▂▁▁
triceps 227 0.70 29.15 10.48 7.00 22.00 29.00 36.00 99.00 ▆▇▁▁▁

We choose the xgboost algorithm from the xgboost package as learner.

learner = lrn("classif.xgboost", nrounds = 100, id = "xgboost", verbose = 0)

Missing Values

The task has missing data in five columns.

round(task$missings() / task$nrow, 2)
diabetes      age  glucose  insulin     mass pedigree pregnant pressure 
    0.00     0.00     0.01     0.49     0.01     0.00     0.00     0.05 
 triceps 
    0.30 

The xgboost learner has an internal method for handling missing data, but many other learners cannot handle missing values at all. We will try to beat this internal method in terms of predictive performance. The mlr3pipelines package offers various methods to impute missing values.

mlr_pipeops$keys("^impute")
[1] "imputeconstant" "imputehist"     "imputelearner"  "imputemean"    
[5] "imputemedian"   "imputemode"     "imputeoor"      "imputesample"  

We choose PipeOpImputeOOR, which adds the new factor level ".MISSING" to factor features and imputes numerical features with constant values shifted below the minimum (default) or above the maximum.

imputer = po("imputeoor")
print(imputer)
PipeOp: <imputeoor> (not trained)
values: <min=TRUE, offset=1, multiplier=1>
Input channels <name [train type, predict type]>:
  input [Task,Task]
Output channels <name [train type, predict type]>:
  output [Task,Task]

As the output suggests, the in- and output of this pipe operator is a Task for both the training and the predict step. We can manually train the pipe operator to check its functionality:

task_imputed = imputer$train(list(task))[[1]]
task_imputed$missings()
diabetes      age pedigree pregnant  glucose  insulin     mass pressure 
       0        0        0        0        0        0        0        0 
 triceps 
       0 

Let’s compare an observation with missing values to the same observation after imputation.

rbind(
  task$data()[8,],
  task_imputed$data()[8,]
)
   diabetes age glucose insulin mass pedigree pregnant pressure triceps
1:      neg  29     115      NA 35.3    0.134       10     <NA>      NA
2:      neg  29     115    -819 35.3    0.134       10 .MISSING     -86

Note that OOR imputation is particularly useful for tree-based models, but should not be used for linear or distance-based models.
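If a linear or distance-based learner were used instead, a different imputation strategy would be a better fit. The following sketch, which is purely illustrative and not used further in this post, combines histogram imputation for the numeric features with mode imputation for the remaining factor column.

# alternative imputation (illustrative only): histogram sampling for numeric
# features, followed by mode imputation for the remaining factor column
imputer_alt = po("imputehist") %>>% po("imputemode")
task_imputed_alt = imputer_alt$train(task)[[1]]
task_imputed_alt$missings()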

Factor Encoding

The xgboost learner cannot handle categorical features. Therefore, we must convert factor columns to numerical dummy columns. For this, we augment the xgboost learner with automatic factor encoding.

The PipeOpEncode encodes factor columns with one of six methods. In this example, we use one-hot encoding which creates a new binary column for each factor level.

factor_encoding = po("encode", method = "one-hot")

We manually trigger the encoding on the task.

factor_encoding$train(list(task))
$output
<TaskClassif:pima> (768 x 11)
* Target: diabetes
* Properties: twoclass
* Features (10):
  - dbl (10): age, glucose, insulin, mass, pedigree, pregnant,
    pressure.high, pressure.low, pressure.mid, triceps

The factor column pressure has been converted to the three binary columns "pressure.low", "pressure.mid", and "pressure.high".
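Since the pipe operator is already trained, we can also inspect the generated dummy columns directly. This quick check is only for illustration.

# look at the new dummy columns created by one-hot encoding (illustrative)
task_encoded = factor_encoding$predict(list(task))[[1]]
head(task_encoded$data(cols = c("pressure.low", "pressure.mid", "pressure.high")))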

Constructing the Pipeline

We created two preprocessing steps which could be used to create a new task with encoded factor variables and imputed missing values. However, if we do this before resampling, information from the test set can leak into our training step, which typically leads to overoptimistic performance estimates. To avoid this, we add the preprocessing steps to the Learner itself, creating a GraphLearner. For this, we first create a Graph.

graph = po("encode") %>>%
  po("imputeoor") %>>%
  learner
plot(graph)

We wrap the Graph in a GraphLearner, which allows us to use the graph like a normal learner.

graph_learner = GraphLearner$new(graph)

# short learner id for printing
graph_learner$id = "graph_learner"

The GraphLearner can be trained and used for making predictions. Instead of calling $train() or $predict() manually, we will directly use it for resampling. We choose a 3-fold cross-validation as the resampling strategy.

resampling = rsmp("cv", folds = 3)

rr = resample(task = task, learner = graph_learner, resampling = resampling)
rr$score()
                task task_id            learner    learner_id
1: <TaskClassif[46]>    pima <GraphLearner[33]> graph_learner
2: <TaskClassif[46]>    pima <GraphLearner[33]> graph_learner
3: <TaskClassif[46]>    pima <GraphLearner[33]> graph_learner
           resampling resampling_id iteration              prediction
1: <ResamplingCV[19]>            cv         1 <PredictionClassif[19]>
2: <ResamplingCV[19]>            cv         2 <PredictionClassif[19]>
3: <ResamplingCV[19]>            cv         3 <PredictionClassif[19]>
   classif.ce
1:  0.2851562
2:  0.2460938
3:  0.2968750

For each resampling iteration, the following steps are performed:

  1. The task is subsetted to the training indices.
  2. The factor encoder replaces factor features with dummy columns in the training task.
  3. The OOR imputer determines values to impute from the training task and then replaces all missing values with learned imputation values.
  4. The learner is applied on the modified training task and the model is stored inside the learner.

Next is the predict step:

  1. The task is subsetted to the test indices.
  2. The factor encoder replaces all factor features with dummy columns in the test task.
  3. The OOR imputer replaces all missing values of the test task with the imputation values learned on the training set.
  4. The learner’s predict method is applied on the modified test task.

By following this procedure, it is guaranteed that no information can leak from the training step to the predict step.
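To make these steps tangible, the following sketch re-enacts a single train/predict split by hand. The split ratio is arbitrary and the snippet is purely illustrative, since resample() already performs all of this for us.

# manual re-enactment of one resampling iteration (illustrative)
train_ids = sample(task$nrow, size = round(2 / 3 * task$nrow))
test_ids = setdiff(seq_len(task$nrow), train_ids)

graph_learner$train(task, row_ids = train_ids)               # train phase, steps 1-4
prediction = graph_learner$predict(task, row_ids = test_ids) # predict phase, steps 1-4
prediction$score(msr("classif.ce"))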

Tuning the Pipeline

Let’s have a look at the parameter set of the GraphLearner. It consists of the xgboost hyperparameters and, additionally, the parameters of the PipeOps encode and imputeoor. All hyperparameters are prefixed with the id of the respective PipeOp or learner.

print(graph_learner$param_set)
<ParamSetCollection>
                                 id    class lower upper nlevels
 1:           encode.affect_columns ParamUty    NA    NA     Inf
 2:                   encode.method ParamFct    NA    NA       5
 3:        imputeoor.affect_columns ParamUty    NA    NA     Inf
 4:                   imputeoor.min ParamLgl    NA    NA       2
 5:            imputeoor.multiplier ParamDbl     0   Inf     Inf
 6:                imputeoor.offset ParamDbl     0   Inf     Inf
 7:                   xgboost.alpha ParamDbl     0   Inf     Inf
 8:           xgboost.approxcontrib ParamLgl    NA    NA       2
 9:              xgboost.base_score ParamDbl  -Inf   Inf     Inf
10:                 xgboost.booster ParamFct    NA    NA       3
11:               xgboost.callbacks ParamUty    NA    NA     Inf
12:       xgboost.colsample_bylevel ParamDbl     0     1     Inf
13:        xgboost.colsample_bynode ParamDbl     0     1     Inf
14:        xgboost.colsample_bytree ParamDbl     0     1     Inf
15:   xgboost.early_stopping_rounds ParamInt     1   Inf     Inf
16:                     xgboost.eta ParamDbl     0     1     Inf
17:             xgboost.eval_metric ParamUty    NA    NA     Inf
18:        xgboost.feature_selector ParamFct    NA    NA       5
19:                   xgboost.feval ParamUty    NA    NA     Inf
20:                   xgboost.gamma ParamDbl     0   Inf     Inf
21:             xgboost.grow_policy ParamFct    NA    NA       2
22: xgboost.interaction_constraints ParamUty    NA    NA     Inf
23:                  xgboost.lambda ParamDbl     0   Inf     Inf
24:             xgboost.lambda_bias ParamDbl     0   Inf     Inf
25:                 xgboost.max_bin ParamInt     2   Inf     Inf
26:          xgboost.max_delta_step ParamDbl     0   Inf     Inf
27:               xgboost.max_depth ParamInt     0   Inf     Inf
28:              xgboost.max_leaves ParamInt     0   Inf     Inf
29:                xgboost.maximize ParamLgl    NA    NA       2
30:        xgboost.min_child_weight ParamDbl     0   Inf     Inf
31:                 xgboost.missing ParamDbl  -Inf   Inf     Inf
32:    xgboost.monotone_constraints ParamInt    -1     1       3
33:          xgboost.normalize_type ParamFct    NA    NA       2
34:                 xgboost.nrounds ParamInt     1   Inf     Inf
35:                 xgboost.nthread ParamInt     1   Inf     Inf
36:              xgboost.ntreelimit ParamInt     1   Inf     Inf
37:       xgboost.num_parallel_tree ParamInt     1   Inf     Inf
38:               xgboost.objective ParamUty    NA    NA     Inf
39:                xgboost.one_drop ParamLgl    NA    NA       2
40:            xgboost.outputmargin ParamLgl    NA    NA       2
41:             xgboost.predcontrib ParamLgl    NA    NA       2
42:               xgboost.predictor ParamFct    NA    NA       2
43:         xgboost.predinteraction ParamLgl    NA    NA       2
44:                xgboost.predleaf ParamLgl    NA    NA       2
45:           xgboost.print_every_n ParamInt     1   Inf     Inf
46:               xgboost.rate_drop ParamDbl     0     1     Inf
47:            xgboost.refresh_leaf ParamLgl    NA    NA       2
48:                 xgboost.reshape ParamLgl    NA    NA       2
49:             xgboost.sample_type ParamFct    NA    NA       2
50:               xgboost.save_name ParamUty    NA    NA     Inf
51:             xgboost.save_period ParamInt     0   Inf     Inf
52:        xgboost.scale_pos_weight ParamDbl  -Inf   Inf     Inf
53:              xgboost.sketch_eps ParamDbl     0     1     Inf
54:               xgboost.skip_drop ParamDbl     0     1     Inf
55:               xgboost.subsample ParamDbl     0     1     Inf
56:                   xgboost.top_k ParamInt     0   Inf     Inf
57:                xgboost.training ParamLgl    NA    NA       2
58:             xgboost.tree_method ParamFct    NA    NA       5
59:  xgboost.tweedie_variance_power ParamDbl     1     2     Inf
60:                 xgboost.updater ParamUty    NA    NA     Inf
61:                 xgboost.verbose ParamInt     0     2       3
62:               xgboost.watchlist ParamUty    NA    NA     Inf
63:               xgboost.xgb_model ParamUty    NA    NA     Inf
                                 id    class lower upper nlevels
            default                                  parents   value
 1:   <Selector[1]>                                                 
 2:  <NoDefault[3]>                                          one-hot
 3:  <NoDefault[3]>                                                 
 4:  <NoDefault[3]>                                             TRUE
 5:  <NoDefault[3]>                                                1
 6:  <NoDefault[3]>                                                1
 7:               0                                                 
 8:           FALSE                                                 
 9:             0.5                                                 
10:          gbtree                                                 
11:       <list[0]>                                                 
12:               1                                                 
13:               1                                                 
14:               1                                                 
15:                                                                 
16:             0.3                                                 
17:           error                                                 
18:          cyclic                          xgboost.booster        
19:                                                                 
20:               0                                                 
21:       depthwise                      xgboost.tree_method        
22:  <NoDefault[3]>                                                 
23:               1                                                 
24:               0                                                 
25:             256                      xgboost.tree_method        
26:               0                                                 
27:               6                                                 
28:               0                      xgboost.grow_policy        
29:                                                                 
30:               1                                                 
31:              NA                                                 
32:               0                                                 
33:            tree                          xgboost.booster        
34:               1                                              100
35:               1                                                1
36:                                                                 
37:               1                                                 
38: binary:logistic                                                 
39:           FALSE                          xgboost.booster        
40:           FALSE                                                 
41:           FALSE                                                 
42:   cpu_predictor                                                 
43:           FALSE                                                 
44:           FALSE                                                 
45:               1                          xgboost.verbose        
46:               0                          xgboost.booster        
47:            TRUE                                                 
48:           FALSE                                                 
49:         uniform                          xgboost.booster        
50:                                                                 
51:                                                                 
52:               1                                                 
53:            0.03                      xgboost.tree_method        
54:               0                          xgboost.booster        
55:               1                                                 
56:               0 xgboost.booster,xgboost.feature_selector        
57:           FALSE                                                 
58:            auto                          xgboost.booster        
59:             1.5                        xgboost.objective        
60:  <NoDefault[3]>                                                 
61:               1                                                0
62:                                                                 
63:                                                                 
            default                                  parents   value
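Prefixed hyperparameters are set like any other hyperparameter via $param_set$values. The following sketch works on a clone so that the graph learner used below remains unchanged; the chosen values are arbitrary.

# address PipeOp and learner hyperparameters via their prefixed ids (illustrative)
gl_clone = graph_learner$clone(deep = TRUE)
gl_clone$param_set$values$imputeoor.multiplier = 2  # PipeOp hyperparameter
gl_clone$param_set$values$xgboost.eta = 0.1         # learner hyperparameter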

We will tune the encode method.

graph_learner$param_set$values$encode.method = to_tune(c("one-hot", "treatment"))

We create a tuning instance with grid search, since we want to try all encoding methods, and use the classification error as the tuning measure.

instance = tune(
  method = "grid_search",
  task = task,
  learner = graph_learner,
  resampling = resampling,
  measure = msr("classif.ce")
)

The archive shows us the performance of the model with different encoding methods.

print(instance$archive)
<ArchiveTuning>
   encode.method classif.ce           timestamp batch_nr
1:       one-hot       0.27 2021-04-17 05:19:23        1
2:     treatment       0.27 2021-04-17 05:19:25        2
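The best configuration found by the grid search can be retrieved from the instance; this is shown only as a quick illustration.

# best configuration and its estimated performance (illustrative)
instance$result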

Nested Resampling

We create one GraphLearner with imputeoor and test it against a GraphLearner that uses the internal imputation method of xgboost. Applying nested resampling ensures a fair comparison of the predictive performances.

graph_1 = po("encode") %>>%
  learner
graph_learner_1 = GraphLearner$new(graph_1)

graph_learner_1$param_set$values$encode.method = to_tune(c("one-hot", "treatment"))

at_1 = AutoTuner$new(
  learner = graph_learner_1,
  resampling = resampling,
  measure = msr("classif.ce"),
  terminator = trm("none"),
  tuner = tnr("grid_search"),
  store_models = TRUE
)
graph_2 = po("encode") %>>%
  po("imputeoor") %>>%
  learner
graph_learner_2 = GraphLearner$new(graph_2)

graph_learner_2$param_set$values$encode.method = to_tune(c("one-hot", "treatment"))

at_2 = AutoTuner$new(
  learner = graph_learner_2,
  resampling = resampling,
  measure = msr("classif.ce"),
  terminator = trm("none"),
  tuner = tnr("grid_search"),
  store_models = TRUE
)

We run the benchmark.

resampling_outer = rsmp("cv", folds = 3)
design = benchmark_grid(task, list(at_1, at_2), resampling_outer)

bmr = benchmark(design, store_models = TRUE)

We compare the aggregated performances on the outer test sets, which gives us an unbiased performance estimate of the two GraphLearners with the different imputation approaches.

bmr$aggregate()
   nr      resample_result task_id                     learner_id resampling_id
1:  1 <ResampleResult[21]>    pima           encode.xgboost.tuned            cv
2:  2 <ResampleResult[21]>    pima encode.imputeoor.xgboost.tuned            cv
   iters classif.ce
1:     3  0.2695312
2:     3  0.2682292
autoplot(bmr)

Note that in practice, preprocessing hyperparameters should be tuned jointly with the hyperparameters of the learner. Otherwise, the comparison of preprocessing steps may be unfair and can lead to wrong conclusions.
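A minimal sketch of such a joint tuning setup is given below; the eta range and the termination budget are chosen purely for illustration.

# jointly tune the encoding method and a learner hyperparameter (illustrative)
graph_joint = po("encode") %>>% po("imputeoor") %>>% learner
graph_learner_joint = GraphLearner$new(graph_joint)
graph_learner_joint$param_set$values$encode.method = to_tune(c("one-hot", "treatment"))
graph_learner_joint$param_set$values$xgboost.eta = to_tune(0.1, 0.5)

at_joint = AutoTuner$new(
  learner = graph_learner_joint,
  resampling = resampling,
  measure = msr("classif.ce"),
  terminator = trm("evals", n_evals = 20),
  tuner = tnr("random_search")
)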

Final Model

We train the chosen GraphLearner with the AutoTuner to get a final model with optimized hyperparameters.

at_2$train(task)

The trained model can now be used to make predictions on new data with at_2$predict(). The pipeline ensures that the preprocessing is always part of the train and predict steps.
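For example, predicting on the (already seen) training task illustrates the call; in practice, new data would be passed instead.

# illustrative prediction; in practice this would be unseen data
prediction = at_2$predict(task)
prediction$confusion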

Resources

The mlr3book includes chapters on pipelines and hyperparameter tuning. The mlr3cheatsheets contain frequently used commands and workflows of mlr3.

Citation

For attribution, please cite this work as

Becker, et al. (2021, March 10). mlr3gallery: Practical Tuning Series - Tune a Preprocessing Pipeline. Retrieved from https://mlr3gallery.mlr-org.com/posts/2021-03-10-practical-tuning-series-tune-a-preprocessing-pipeline/

BibTeX citation

@misc{becker2021practical,
  author = {Becker, Marc and Ullmann, Theresa and Lang, Michel and Bischl, Bernd and Richter, Jakob and Binder, Martin},
  title = {mlr3gallery: Practical Tuning Series - Tune a Preprocessing Pipeline},
  url = {https://mlr3gallery.mlr-org.com/posts/2021-03-10-practical-tuning-series-tune-a-preprocessing-pipeline/},
  year = {2021}
}