mlr3tuning Tutorial - German Credit


In this use case, we continue working with the German credit dataset. We work on hyperparameter tuning and apply nested resampling.

Martin Binder, Florian Pfisterer
03-11-2020

Intro

This is the second part of a series of tutorials; the other parts of the series can be found on the mlr3gallery site.

We will continue working with the German credit dataset. In Part I, we peeked into the dataset by using and comparing some learners with their default parameters. We will now see how to:

  1. tune hyperparameters for a given learner, and
  2. apply nested resampling to obtain unbiased performance estimates.

Prerequisites

First, load the packages we are going to use. Based on the functions called throughout this post, these are mlr3, mlr3learners (for the kknn learner), mlr3tuning, paradox, and ggplot2:
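library("mlr3")
library("mlr3learners") # provides the classif.kknn learner
library("mlr3tuning")
library("paradox")
library("ggplot2")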

We use the same Task as in Part I:

task = tsk("german_credit")

We might also want to use multiple cores to reduce the long runtimes of tuning runs.

# future::plan("multiprocess") # uncomment for parallelization
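Note that the "multiprocess" plan has been deprecated in newer versions of the future package; "multisession" is the portable replacement:

# future::plan("multisession") # uncomment for parallelization (newer future versions)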

Evaluation

We will evaluate all hyperparameter configurations using 10-fold cross-validation. We use fixed train-test splits, i.e. the same splits for each evaluation. Otherwise, some evaluations could by chance get unusually "hard" splits, which would make comparisons unfair.

set.seed(8008135)
cv10_instance = rsmp("cv", folds = 10)

# fix the train-test splits using the $instantiate() method
cv10_instance$instantiate(task)

# have a look at the test set instances per fold
cv10_instance$instance
      row_id fold
   1:      5    1
   2:     20    1
   3:     28    1
   4:     35    1
   5:     37    1
  ---            
 996:    936   10
 997:    950   10
 998:    963   10
 999:    985   10
1000:    994   10
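Because the resampling is instantiated, every hyperparameter configuration will be evaluated on exactly these splits. The test observations of a given fold can be retrieved directly, e.g. for the first fold:

# row indices of the test set of fold 1, fixed for all evaluations
head(cv10_instance$test_set(1))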

Simple Parameter Tuning

Parameter tuning in mlr3 needs two packages:

  1. The paradox package is used to define the search space of the hyperparameters
  2. The mlr3tuning package is used to tune the hyperparameters

Search Space and Problem Definition

First, we need to decide which Learner we want to optimize. We will use LearnerClassifKKNN, the "kernelized" k-nearest neighbor classifier. We first use kknn as a plain kNN without distance weighting (i.e., with the rectangular kernel):

knn = lrn("classif.kknn", predict_type = "prob")
knn$param_set$values$kernel = "rectangular"

As a next step, we decide what parameters to optimize over. Before that, let us look at the full parameter set we could tune:

knn$param_set
<ParamSet>
         id    class lower upper
1:        k ParamInt     1   Inf
2: distance ParamDbl     0   Inf
3:   kernel ParamFct    NA    NA
4:    scale ParamLgl    NA    NA
5:  ykernel ParamUty    NA    NA
                                                           levels default
1:                                                                      7
2:                                                                      2
3: rectangular,triangular,epanechnikov,biweight,triweight,cos,... optimal
4:                                                     TRUE,FALSE    TRUE
5:                                                                       
         value
1:            
2:            
3: rectangular
4:            
5:            

We first tune the k parameter (i.e. the number of nearest neighbors) in the range 3 to 20. Second, we tune the distance function, allowing L1 and L2 distances. To do so, we use the paradox package to define a search space (see the online vignette for a more complete introduction).

search_space = ParamSet$new(list(
  ParamInt$new("k", lower = 3, upper = 20),
  ParamInt$new("distance", lower = 1, upper = 2)
))
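As a side note, newer versions of paradox provide a shorter sugar syntax that defines the same search space:

# equivalent, more compact definition using the ps()/p_int() shorthand
# available in newer paradox versions
search_space = ps(
  k = p_int(lower = 3, upper = 20),
  distance = p_int(lower = 1, upper = 2)
)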

As a next step, we define a TuningInstanceSingleCrit that represents the problem we are trying to optimize.

instance_grid = TuningInstanceSingleCrit$new(
  task = task,
  learner = knn,
  resampling = cv10_instance,
  measure = msr("classif.ce"),
  terminator = trm("none"),
  search_space = search_space
)

Having set up a tuning instance, we can almost start tuning; we still need a tuning strategy first. A simple method is to try all possible combinations of parameters: grid search. While intuitive and simple, it is inefficient when the search space is large. For this small use case, however, it suffices. We get the grid_search tuner via:

set.seed(1)
tuner_grid = tnr("grid_search", resolution = 18, batch_size = 36)
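With resolution = 18, grid search uses 18 equally spaced values of k (here, all integers from 3 to 20) and both values of distance, i.e. 36 configurations, which batch_size = 36 evaluates in a single batch. If you want to inspect the grid beforehand, paradox can generate it directly (a small aside, not part of the original code):

# preview the 18 x 2 = 36 grid points that grid search will evaluate
generate_design_grid(search_space, resolution = 18)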

Tuning works by calling $optimize(). Note that the tuning procedure modifies our tuning instance (as usual for R6 class objects). The result can be found in the instance object. Before tuning it is empty:

instance_grid$result
NULL

Now, we tune:

tuner_grid$optimize(instance_grid)
   k distance learner_param_vals  x_domain classif.ce
1: 9        2          <list[3]> <list[2]>       0.25

The result is returned by $optimize() together with its performance. It can also be accessed with the $result slot:

instance_grid$result
   k distance learner_param_vals  x_domain classif.ce
1: 9        2          <list[3]> <list[2]>       0.25

We can also look at the Archive of evaluated configurations:

instance_grid$archive$data()
     k distance classif.ce                                uhash  x_domain
 1:  3        1      0.271 da825d60-13e3-4754-979c-bb83a135eba8 <list[2]>
 2:  3        2      0.273 f530a6fc-364a-4ccc-8b75-38fbc5505999 <list[2]>
 3:  4        1      0.292 47b6fe6c-c47a-41a6-902d-37a4a3dfc2bb <list[2]>
 4:  4        2      0.279 fdbdde74-7eae-42e8-8b25-2daef6b31254 <list[2]>
 5:  5        1      0.271 83249ffd-2990-4b8b-9f16-6f17aabdb915 <list[2]>
 6:  5        2      0.274 c9da0cc4-a100-410a-a59f-d69d929e7a41 <list[2]>
 7:  6        1      0.278 19483ec7-1f83-4e28-b3c4-58103fb99253 <list[2]>
 8:  6        2      0.273 16b1eb7d-5e0c-4cad-bf80-18057060b985 <list[2]>
 9:  7        1      0.257 6a22a9aa-089e-409f-9137-c88ec0276751 <list[2]>
10:  7        2      0.258 41495af2-fa6b-4ad8-8c04-07f1c6cd779f <list[2]>
11:  8        1      0.264 f0b17f7a-05aa-4cfc-b4f5-df2aab99b21d <list[2]>
12:  8        2      0.256 49157002-ccf1-4adf-bec6-98d9252e4a56 <list[2]>
13:  9        1      0.251 e137d220-a2a5-49bd-bd55-8eb6c4afcdce <list[2]>
14:  9        2      0.250 ebd33996-c533-465c-a606-ee62aa93af85 <list[2]>
15: 10        1      0.261 4a3bc358-5be3-4af6-933e-95273e931690 <list[2]>
16: 10        2      0.250 e32cd0c9-20da-47ba-a1cc-4cc6428fd453 <list[2]>
17: 11        1      0.256 f9eee78a-52e2-423e-927c-f008352434b2 <list[2]>
18: 11        2      0.254 8bfa62fc-8cb4-4a62-b8ba-3d93bffe1b17 <list[2]>
19: 12        1      0.260 4c65e50d-b765-42c2-91ad-34196fac373c <list[2]>
20: 12        2      0.259 36e5d8da-229b-4a09-ac4a-ec5ae0f71912 <list[2]>
21: 13        1      0.268 4659c3dc-c153-40fc-8911-e60d450675ad <list[2]>
22: 13        2      0.258 be8802be-d280-4f34-8104-6670c9b23072 <list[2]>
23: 14        1      0.265 df8ce2fe-cfe2-497b-89a5-6712d5a0e04d <list[2]>
24: 14        2      0.263 5f2bf8bb-de2e-48b6-aaf2-eeac24851b38 <list[2]>
25: 15        1      0.268 5b13cdfc-75c5-43de-bae8-6db86c7d0f93 <list[2]>
26: 15        2      0.264 89ed2517-4d66-4965-8b64-7c6c8b8989c1 <list[2]>
27: 16        1      0.267 0a5eae86-e58a-4648-9484-b4bd966432fb <list[2]>
28: 16        2      0.262 d360b935-d5f9-4111-91f0-56cf2f1ebf6b <list[2]>
29: 17        1      0.264 70e3ac4b-554a-44d6-91d9-bbc5d36d76b7 <list[2]>
30: 17        2      0.267 b48be6b8-bd8c-4962-aba6-a5737b9c618e <list[2]>
31: 18        1      0.273 7a7aafd5-1019-4812-a4db-054ac9b7331d <list[2]>
32: 18        2      0.271 18724213-9a00-4fb2-8a63-1789ebfe125e <list[2]>
33: 19        1      0.269 cc3c3546-f5a0-48fa-bc03-897a97d0b5b8 <list[2]>
34: 19        2      0.269 d6fb0f2f-8c6e-4971-88a0-f04afd72f8b8 <list[2]>
35: 20        1      0.268 ecc2a992-6579-4e65-9fda-2710c86474e0 <list[2]>
36: 20        2      0.269 1c2a20d7-5559-4d41-be00-32d22c5b946b <list[2]>
     k distance classif.ce                                uhash  x_domain
              timestamp batch_nr
 1: 2021-01-21 05:14:58        1
 2: 2021-01-21 05:14:58        1
 3: 2021-01-21 05:14:58        1
 4: 2021-01-21 05:14:58        1
 5: 2021-01-21 05:14:58        1
 6: 2021-01-21 05:14:58        1
 7: 2021-01-21 05:14:58        1
 8: 2021-01-21 05:14:58        1
 9: 2021-01-21 05:14:58        1
10: 2021-01-21 05:14:58        1
11: 2021-01-21 05:14:58        1
12: 2021-01-21 05:14:58        1
13: 2021-01-21 05:14:58        1
14: 2021-01-21 05:14:58        1
15: 2021-01-21 05:14:58        1
16: 2021-01-21 05:14:58        1
17: 2021-01-21 05:14:58        1
18: 2021-01-21 05:14:58        1
19: 2021-01-21 05:14:58        1
20: 2021-01-21 05:14:58        1
21: 2021-01-21 05:14:58        1
22: 2021-01-21 05:14:58        1
23: 2021-01-21 05:14:58        1
24: 2021-01-21 05:14:58        1
25: 2021-01-21 05:14:58        1
26: 2021-01-21 05:14:58        1
27: 2021-01-21 05:14:58        1
28: 2021-01-21 05:14:58        1
29: 2021-01-21 05:14:58        1
30: 2021-01-21 05:14:58        1
31: 2021-01-21 05:14:58        1
32: 2021-01-21 05:14:58        1
33: 2021-01-21 05:14:58        1
34: 2021-01-21 05:14:58        1
35: 2021-01-21 05:14:58        1
36: 2021-01-21 05:14:58        1
              timestamp batch_nr

We plot the performances depending on the sampled k and distance:

ggplot(instance_grid$archive$data(), aes(x = k, y = classif.ce, color = as.factor(distance))) +
  geom_line() + geom_point(size = 3)

On average, the Euclidean distance (distance = 2) seems to work better. However, the resampling instance introduces a lot of randomness, so you may see a different result when you run the experiment yourself with a different random seed. For k, we find that values between 7 and 13 perform well.

Random Search and Transformation

Let’s have a look at a larger search space. For example, we could tune more of the available parameters and allow larger values of k (up to 50). We now also tune the distance parameter continuously from 1 to 3 as a double, as well as the kernel and whether we scale the features.

We encounter two problems when doing so:

First, the resulting difference in performance between k = 3 and k = 4 is probably larger than the difference between k = 49 and k = 50: while 4 is 33% larger than 3, 50 is only 2% larger than 49. To account for this, we use a transformation function for k and optimize in log-space: we define the range for k from log(3) to log(50) and exponentiate in the transformation. Since k has now become a double instead of an integer (in the search space, before transformation), we also round it in the trafo.

large_searchspace = ParamSet$new(list(
  ParamDbl$new("k", lower = log(3), upper = log(50)),
  ParamDbl$new("distance", lower = 1, upper = 3),
  ParamFct$new("kernel", c("rectangular", "gaussian", "rank", "optimal")),
  ParamLgl$new("scale")
))

large_searchspace$trafo = function(x, param_set) {
  x$k = round(exp(x$k))
  x
}
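We can apply the transformation to a single point by hand to see what the learner will eventually receive (the values below are taken from the tuning archive shown later):

# a point sampled from the search space, before transformation
point = list(k = 2.444957, distance = 1.540520, kernel = "optimal", scale = TRUE)

# the trafo rounds exp(k): here round(exp(2.444957)) = 12, all else is unchanged
large_searchspace$trafo(point, large_searchspace)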

The second problem is that grid search may (and often will) take a long time. For instance, trying out three values each for k, distance, and kernel, together with the two values for scale, already amounts to 3 × 3 × 3 × 2 = 54 evaluations. We therefore use a different search algorithm: random search. It requires a termination criterion in the tuning instance, which tells the search algorithm when to stop. Here, we terminate after 36 evaluations:

tuner_random = tnr("random_search", batch_size = 36)

instance_random = TuningInstanceSingleCrit$new(
  task = task,
  learner = knn,
  resampling = cv10_instance,
  measure = msr("classif.ce"),
  terminator = trm("evals", n_evals = 36),
  search_space = large_searchspace
)
tuner_random$optimize(instance_random)
          k distance  kernel scale learner_param_vals  x_domain classif.ce
1: 2.444957  1.54052 optimal  TRUE          <list[4]> <list[4]>      0.249

As before, we can review the Archive. It contains the points both before and after the transformation. There is one column for each parameter the Tuner sampled from the search space (i.e. values before the transformation):

instance_random$archive$data()
           k distance      kernel scale classif.ce
 1: 2.654531 2.921236 rectangular FALSE      0.314
 2: 2.588931 1.869319        rank  TRUE      0.254
 3: 3.319396 2.425029 rectangular  TRUE      0.272
 4: 1.164253 1.799989    gaussian FALSE      0.364
 5: 2.441256 1.650704        rank  TRUE      0.253
 6: 3.158912 2.514174     optimal FALSE      0.305
 7: 3.047551 1.405385    gaussian  TRUE      0.257
 8: 2.442352 2.422242    gaussian  TRUE      0.270
 9: 3.521548 1.243384 rectangular  TRUE      0.271
10: 2.331159 1.490977     optimal  TRUE      0.252
11: 1.787328 1.286609    gaussian FALSE      0.345
12: 1.297461 1.479259        rank  TRUE      0.274
13: 1.378451 1.117869 rectangular FALSE      0.357
14: 1.988414 2.284577 rectangular FALSE      0.313
15: 2.557743 2.752538        rank  TRUE      0.267
16: 2.961104 2.557829        rank  TRUE      0.264
17: 2.243193 2.594618 rectangular  TRUE      0.256
18: 3.666907 1.910549 rectangular FALSE      0.292
19: 1.924639 1.820168        rank  TRUE      0.256
20: 2.390153 2.621740     optimal FALSE      0.339
21: 2.033775 2.209867        rank FALSE      0.332
22: 2.929778 2.309448        rank FALSE      0.302
23: 1.824519 1.706395        rank  TRUE      0.261
24: 2.444957 1.540520     optimal  TRUE      0.249
25: 3.254559 2.985368        rank FALSE      0.302
26: 1.335633 2.266987        rank FALSE      0.361
27: 3.561251 1.426416        rank FALSE      0.296
28: 2.052564 1.258745 rectangular FALSE      0.324
29: 3.460303 1.956236    gaussian FALSE      0.296
30: 2.073975 2.848149        rank  TRUE      0.269
31: 2.037658 2.197522    gaussian  TRUE      0.267
32: 2.438784 2.952341 rectangular FALSE      0.308
33: 3.608733 2.463585        rank FALSE      0.294
34: 3.530354 1.713454 rectangular FALSE      0.297
35: 2.195813 1.862947     optimal  TRUE      0.257
36: 3.285535 1.296423        rank FALSE      0.300
           k distance      kernel scale classif.ce
                                   uhash  x_domain           timestamp batch_nr
 1: 7c66699a-e04f-4b21-bdfc-ac222107d676 <list[4]> 2021-01-21 05:15:31        1
 2: f97db152-c2b8-4bb2-afdc-3b8e556f12cc <list[4]> 2021-01-21 05:15:31        1
 3: 0c2d3037-5b7a-4ea3-a614-f166c60a7b24 <list[4]> 2021-01-21 05:15:31        1
 4: ddd66c3b-8b88-498d-9684-50ca0712f008 <list[4]> 2021-01-21 05:15:31        1
 5: 498f0b4a-3d55-4200-90dc-7ed1a7fa35f9 <list[4]> 2021-01-21 05:15:31        1
 6: 2e432936-68bf-408a-a2f5-2b4e89965713 <list[4]> 2021-01-21 05:15:31        1
 7: ef44a19d-875f-42ff-a4fb-5398c9e3e2bf <list[4]> 2021-01-21 05:15:31        1
 8: 9626cfec-6554-40e4-a297-7b5d8a47bbfc <list[4]> 2021-01-21 05:15:31        1
 9: 99b30ee0-3556-4e17-9669-633cbea80209 <list[4]> 2021-01-21 05:15:31        1
10: a4d06946-83ad-4796-976e-e004df8b3693 <list[4]> 2021-01-21 05:15:31        1
11: a5df2c3f-b132-4388-bd73-42847374aee1 <list[4]> 2021-01-21 05:15:31        1
12: 6e75f15e-04a7-4ca5-9475-aeae47feb81c <list[4]> 2021-01-21 05:15:31        1
13: de0b184e-cb7b-4c12-8ef0-ef79a84db37f <list[4]> 2021-01-21 05:15:31        1
14: 37510ca7-1ffd-4935-a837-bf22035fe25f <list[4]> 2021-01-21 05:15:31        1
15: 19e609e8-133b-4b4e-9263-b5c054b8c758 <list[4]> 2021-01-21 05:15:31        1
16: e1d37052-a487-4b9c-a8c5-2cfce41c1cfe <list[4]> 2021-01-21 05:15:31        1
17: 5415e17f-b0b7-4a27-9301-52e9e96071e0 <list[4]> 2021-01-21 05:15:31        1
18: ec3a0c84-eb7b-43e3-8250-58b9eb0dc8cf <list[4]> 2021-01-21 05:15:31        1
19: 4d7d506b-62d1-4192-a17d-1cadb8dcabae <list[4]> 2021-01-21 05:15:31        1
20: 6fe3c6cf-0722-4898-a456-5b5ab1e5bec4 <list[4]> 2021-01-21 05:15:31        1
21: 0292a944-652e-4740-93d8-fac0da51067d <list[4]> 2021-01-21 05:15:31        1
22: 63d4f9b7-1f04-4b9a-87b7-26556b711a45 <list[4]> 2021-01-21 05:15:31        1
23: bdb86312-2bf5-4652-a972-5ac7078c7f6a <list[4]> 2021-01-21 05:15:31        1
24: ee7b4cc8-4f2f-4b58-ad3b-61254bd2510c <list[4]> 2021-01-21 05:15:31        1
25: c1a3073d-aaf5-441c-a344-c3d32b89263d <list[4]> 2021-01-21 05:15:31        1
26: c3802b20-d341-4dd7-b908-828ede3203c7 <list[4]> 2021-01-21 05:15:31        1
27: b27e0a75-3ccd-4548-a714-f897308e179a <list[4]> 2021-01-21 05:15:31        1
28: a4c08e5c-6db6-4ecc-b7bf-4d2b146f4528 <list[4]> 2021-01-21 05:15:31        1
29: fc707f64-174e-49de-bbee-60ec2a825210 <list[4]> 2021-01-21 05:15:31        1
30: b9c7b58c-56ed-4ed5-bac9-8f3d558d7e69 <list[4]> 2021-01-21 05:15:31        1
31: eda524e2-01b2-4d3d-b789-5d3390039556 <list[4]> 2021-01-21 05:15:31        1
32: fe1570f7-225e-472f-bc6f-a480b3057dd3 <list[4]> 2021-01-21 05:15:31        1
33: 9e8b9e78-afcb-47e9-8467-3d75a9e26042 <list[4]> 2021-01-21 05:15:31        1
34: 6c8255a2-513a-49c3-82bf-8980f2b56435 <list[4]> 2021-01-21 05:15:31        1
35: 7d87f1b1-5a3c-45e4-91e6-e7e93df9d3b2 <list[4]> 2021-01-21 05:15:31        1
36: 71e374d6-8f77-45ad-9714-b2d5d80e6f0e <list[4]> 2021-01-21 05:15:31        1
                                   uhash  x_domain           timestamp batch_nr

The parameters actually used by the learner (the points after the transformation) are stored in the x_domain column as lists. By using unnest = "x_domain", the list elements are expanded to separate columns:

instance_random$archive$data(unnest = "x_domain")
           k distance      kernel scale classif.ce
 1: 2.654531 2.921236 rectangular FALSE      0.314
 2: 2.588931 1.869319        rank  TRUE      0.254
 3: 3.319396 2.425029 rectangular  TRUE      0.272
 4: 1.164253 1.799989    gaussian FALSE      0.364
 5: 2.441256 1.650704        rank  TRUE      0.253
 6: 3.158912 2.514174     optimal FALSE      0.305
 7: 3.047551 1.405385    gaussian  TRUE      0.257
 8: 2.442352 2.422242    gaussian  TRUE      0.270
 9: 3.521548 1.243384 rectangular  TRUE      0.271
10: 2.331159 1.490977     optimal  TRUE      0.252
11: 1.787328 1.286609    gaussian FALSE      0.345
12: 1.297461 1.479259        rank  TRUE      0.274
13: 1.378451 1.117869 rectangular FALSE      0.357
14: 1.988414 2.284577 rectangular FALSE      0.313
15: 2.557743 2.752538        rank  TRUE      0.267
16: 2.961104 2.557829        rank  TRUE      0.264
17: 2.243193 2.594618 rectangular  TRUE      0.256
18: 3.666907 1.910549 rectangular FALSE      0.292
19: 1.924639 1.820168        rank  TRUE      0.256
20: 2.390153 2.621740     optimal FALSE      0.339
21: 2.033775 2.209867        rank FALSE      0.332
22: 2.929778 2.309448        rank FALSE      0.302
23: 1.824519 1.706395        rank  TRUE      0.261
24: 2.444957 1.540520     optimal  TRUE      0.249
25: 3.254559 2.985368        rank FALSE      0.302
26: 1.335633 2.266987        rank FALSE      0.361
27: 3.561251 1.426416        rank FALSE      0.296
28: 2.052564 1.258745 rectangular FALSE      0.324
29: 3.460303 1.956236    gaussian FALSE      0.296
30: 2.073975 2.848149        rank  TRUE      0.269
31: 2.037658 2.197522    gaussian  TRUE      0.267
32: 2.438784 2.952341 rectangular FALSE      0.308
33: 3.608733 2.463585        rank FALSE      0.294
34: 3.530354 1.713454 rectangular FALSE      0.297
35: 2.195813 1.862947     optimal  TRUE      0.257
36: 3.285535 1.296423        rank FALSE      0.300
           k distance      kernel scale classif.ce
                                   uhash           timestamp batch_nr
 1: 7c66699a-e04f-4b21-bdfc-ac222107d676 2021-01-21 05:15:31        1
 2: f97db152-c2b8-4bb2-afdc-3b8e556f12cc 2021-01-21 05:15:31        1
 3: 0c2d3037-5b7a-4ea3-a614-f166c60a7b24 2021-01-21 05:15:31        1
 4: ddd66c3b-8b88-498d-9684-50ca0712f008 2021-01-21 05:15:31        1
 5: 498f0b4a-3d55-4200-90dc-7ed1a7fa35f9 2021-01-21 05:15:31        1
 6: 2e432936-68bf-408a-a2f5-2b4e89965713 2021-01-21 05:15:31        1
 7: ef44a19d-875f-42ff-a4fb-5398c9e3e2bf 2021-01-21 05:15:31        1
 8: 9626cfec-6554-40e4-a297-7b5d8a47bbfc 2021-01-21 05:15:31        1
 9: 99b30ee0-3556-4e17-9669-633cbea80209 2021-01-21 05:15:31        1
10: a4d06946-83ad-4796-976e-e004df8b3693 2021-01-21 05:15:31        1
11: a5df2c3f-b132-4388-bd73-42847374aee1 2021-01-21 05:15:31        1
12: 6e75f15e-04a7-4ca5-9475-aeae47feb81c 2021-01-21 05:15:31        1
13: de0b184e-cb7b-4c12-8ef0-ef79a84db37f 2021-01-21 05:15:31        1
14: 37510ca7-1ffd-4935-a837-bf22035fe25f 2021-01-21 05:15:31        1
15: 19e609e8-133b-4b4e-9263-b5c054b8c758 2021-01-21 05:15:31        1
16: e1d37052-a487-4b9c-a8c5-2cfce41c1cfe 2021-01-21 05:15:31        1
17: 5415e17f-b0b7-4a27-9301-52e9e96071e0 2021-01-21 05:15:31        1
18: ec3a0c84-eb7b-43e3-8250-58b9eb0dc8cf 2021-01-21 05:15:31        1
19: 4d7d506b-62d1-4192-a17d-1cadb8dcabae 2021-01-21 05:15:31        1
20: 6fe3c6cf-0722-4898-a456-5b5ab1e5bec4 2021-01-21 05:15:31        1
21: 0292a944-652e-4740-93d8-fac0da51067d 2021-01-21 05:15:31        1
22: 63d4f9b7-1f04-4b9a-87b7-26556b711a45 2021-01-21 05:15:31        1
23: bdb86312-2bf5-4652-a972-5ac7078c7f6a 2021-01-21 05:15:31        1
24: ee7b4cc8-4f2f-4b58-ad3b-61254bd2510c 2021-01-21 05:15:31        1
25: c1a3073d-aaf5-441c-a344-c3d32b89263d 2021-01-21 05:15:31        1
26: c3802b20-d341-4dd7-b908-828ede3203c7 2021-01-21 05:15:31        1
27: b27e0a75-3ccd-4548-a714-f897308e179a 2021-01-21 05:15:31        1
28: a4c08e5c-6db6-4ecc-b7bf-4d2b146f4528 2021-01-21 05:15:31        1
29: fc707f64-174e-49de-bbee-60ec2a825210 2021-01-21 05:15:31        1
30: b9c7b58c-56ed-4ed5-bac9-8f3d558d7e69 2021-01-21 05:15:31        1
31: eda524e2-01b2-4d3d-b789-5d3390039556 2021-01-21 05:15:31        1
32: fe1570f7-225e-472f-bc6f-a480b3057dd3 2021-01-21 05:15:31        1
33: 9e8b9e78-afcb-47e9-8467-3d75a9e26042 2021-01-21 05:15:31        1
34: 6c8255a2-513a-49c3-82bf-8980f2b56435 2021-01-21 05:15:31        1
35: 7d87f1b1-5a3c-45e4-91e6-e7e93df9d3b2 2021-01-21 05:15:31        1
36: 71e374d6-8f77-45ad-9714-b2d5d80e6f0e 2021-01-21 05:15:31        1
                                   uhash           timestamp batch_nr
    x_domain_k x_domain_distance x_domain_kernel x_domain_scale
 1:         14          2.921236     rectangular          FALSE
 2:         13          1.869319            rank           TRUE
 3:         28          2.425029     rectangular           TRUE
 4:          3          1.799989        gaussian          FALSE
 5:         11          1.650704            rank           TRUE
 6:         24          2.514174         optimal          FALSE
 7:         21          1.405385        gaussian           TRUE
 8:         12          2.422242        gaussian           TRUE
 9:         34          1.243384     rectangular           TRUE
10:         10          1.490977         optimal           TRUE
11:          6          1.286609        gaussian          FALSE
12:          4          1.479259            rank           TRUE
13:          4          1.117869     rectangular          FALSE
14:          7          2.284577     rectangular          FALSE
15:         13          2.752538            rank           TRUE
16:         19          2.557829            rank           TRUE
17:          9          2.594618     rectangular           TRUE
18:         39          1.910549     rectangular          FALSE
19:          7          1.820168            rank           TRUE
20:         11          2.621740         optimal          FALSE
21:          8          2.209867            rank          FALSE
22:         19          2.309448            rank          FALSE
23:          6          1.706395            rank           TRUE
24:         12          1.540520         optimal           TRUE
25:         26          2.985368            rank          FALSE
26:          4          2.266987            rank          FALSE
27:         35          1.426416            rank          FALSE
28:          8          1.258745     rectangular          FALSE
29:         32          1.956236        gaussian          FALSE
30:          8          2.848149            rank           TRUE
31:          8          2.197522        gaussian           TRUE
32:         11          2.952341     rectangular          FALSE
33:         37          2.463585            rank          FALSE
34:         34          1.713454     rectangular          FALSE
35:          9          1.862947         optimal           TRUE
36:         27          1.296423            rank          FALSE
    x_domain_k x_domain_distance x_domain_kernel x_domain_scale

Let’s now investigate the performance by parameters. This is especially easy using visualization:

ggplot(instance_random$archive$data(unnest = "x_domain"),
  aes(x = x_domain_k, y = classif.ce, color = x_domain_scale)) +
  geom_point(size = 3) + geom_line()

The previous plot suggests that scale has a strong influence on performance. The kernel, in contrast, does not seem to have a strong influence:

ggplot(instance_random$archive$data(unnest = "x_domain"),
  aes(x = x_domain_k, y = classif.ce, color = x_domain_kernel)) +
  geom_point(size = 3) + geom_line()

Nested Resampling

Having found tuned configurations that seem to work well, we want to know what performance we can expect from them. However, estimating it requires more than this naive look at the tuning results:

instance_random$result_y
classif.ce 
     0.249 
instance_grid$result_y
classif.ce 
      0.25 

The problem associated with evaluating tuned models is overtuning. The more we search, the more optimistically biased the associated performance metrics from tuning become.

There is a solution to this problem, namely Nested Resampling.

The mlr3tuning package provides an AutoTuner, which runs our tuning method internally but otherwise acts like a Learner. Its $train() method tunes the hyperparameters on the training data using a resampling strategy (below we use 5-fold cross-validation), and then trains a final model with the optimal hyperparameters on the whole training data.

The AutoTuner finds the best parameters and uses them for training:

grid_auto = AutoTuner$new(
  learner = knn,
  resampling = rsmp("cv", folds = 5), # we can NOT use fixed resampling here
  measure = msr("classif.ce"),
  terminator = trm("none"),
  tuner = tnr("grid_search", resolution = 18),
  search_space = search_space
)
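The AutoTuner behaves just like a regular Learner; it can, for instance, be trained directly, combining hyperparameter tuning and final model fitting in a single $train() call. A minimal sketch, not part of the original analysis:

at = grid_auto$clone(deep = TRUE) # work on a copy so grid_auto stays untouched
at$train(task)    # tunes with the inner 5-fold CV, then refits on the whole task
at$tuning_result  # the hyperparameters selected for the final model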

It is especially useful, however, for resampling and fair comparison of performance through benchmarking:

rr = resample(task, grid_auto, cv10_instance, store_models = TRUE)

We aggregate the performances of all resampling iterations:

rr$aggregate()
classif.ce 
     0.261 

Essentially, this is the performance of a "kNN with optimal hyperparameters found by grid search". Note that grid_auto itself is not changed, since resample() creates a clone for each resampling iteration. The trained AutoTuner objects can be accessed via:

rr$learners[[1]]
<AutoTuner:classif.kknn.tuned>
* Model: list
* Parameters: list()
* Packages: kknn
* Predict Type: prob
* Feature types: logical, integer, numeric, factor, ordered
* Properties: multiclass, twoclass
rr$learners[[1]]$tuning_result
   k distance learner_param_vals  x_domain classif.ce
1: 9        2          <list[3]> <list[2]>  0.2522222
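Each outer fold may select different hyperparameters in its inner tuning. To compare them across all ten folds, we can collect the individual tuning results; a small sketch extending the original code:

# gather the per-fold tuning results into one table (one row per outer fold)
data.table::rbindlist(lapply(rr$learners, function(at) at$tuning_result))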

Appendix

Example: Tuning With A Larger Budget

It is always interesting to look at what could have been. The following dataset, stored in perfdata and printed below, contains the result of an optimization run with 3600 evaluations, 100 times more than above:

             k distance   kernel scale classif.ce
   1: 2.191216 2.232217 gaussian FALSE      0.312
   2: 3.549142 1.058476     rank FALSE      0.296
   3: 2.835727 2.121690  optimal  TRUE      0.251
   4: 1.118085 1.275450     rank FALSE      0.368
   5: 2.790168 2.126899  optimal FALSE      0.320
  ---                                            
3596: 3.023075 1.413180  optimal FALSE      0.306
3597: 3.243131 1.827885 gaussian  TRUE      0.255
3598: 1.628957 2.254808     rank  TRUE      0.271
3599: 3.298112 2.984946  optimal FALSE      0.301
3600: 3.855455 2.613641 gaussian FALSE      0.294
                                     uhash           timestamp batch_nr
   1: 0bac4d72-2dad-4d0f-8502-22cc27284bb2 2020-12-09 15:08:29        1
   2: fe8de768-e91a-4a89-9fe2-521083a24466 2020-12-09 15:08:29        1
   3: bbeab2d5-19aa-4382-a351-d5ece72a58a2 2020-12-09 15:08:29        1
   4: 93b5f285-66b9-4ea6-a8fc-700a11ed08a2 2020-12-09 15:08:29        1
   5: 1ab6ca63-96db-414e-9932-67f23592aab9 2020-12-09 15:08:29        1
  ---                                                                  
3596: ae870063-e324-4f69-bfea-7c973ccabb02 2020-12-09 16:05:45      100
3597: 8f2d97f6-9ead-482d-96df-dade4277054c 2020-12-09 16:05:45      100
3598: 5b1d82a7-d330-48a6-b4be-af06d6a73a4c 2020-12-09 16:05:45      100
3599: 9e77de4a-f3ab-43fc-a89b-987e53be04a4 2020-12-09 16:05:45      100
3600: 1ce1c23f-a756-482a-8b46-65abb1270fd2 2020-12-09 16:05:45      100
      x_domain_k x_domain_distance x_domain_kernel x_domain_scale
   1:          9          2.232217        gaussian          FALSE
   2:         35          1.058476            rank          FALSE
   3:         17          2.121690         optimal           TRUE
   4:          3          1.275450            rank          FALSE
   5:         16          2.126899         optimal          FALSE
  ---                                                            
3596:         21          1.413180         optimal          FALSE
3597:         26          1.827885        gaussian           TRUE
3598:          5          2.254808            rank           TRUE
3599:         27          2.984946         optimal          FALSE
3600:         47          2.613641        gaussian          FALSE

The effect of scale is just as visible as before, when we had less data:

ggplot(perfdata, aes(x = x_domain_k, y = classif.ce, color = scale)) +
  geom_point(size = 2, alpha = 0.3)

Now, there seems to be a visible pattern by kernel as well:

ggplot(perfdata, aes(x = x_domain_k, y = classif.ce, color = kernel)) +
  geom_point(size = 2, alpha = 0.3)

In fact, if we zoom in to (5, 40) × (0.23, 0.28) and apply smoothing, we see that different kernels have their optimum at different values of k:

ggplot(perfdata, aes(x = x_domain_k, y = classif.ce, color = kernel,
  group = interaction(kernel, scale))) +
  geom_point(size = 2, alpha = 0.3) + geom_smooth() +
  xlim(5, 40) + ylim(0.23, 0.28)
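One caveat: xlim() and ylim() drop all observations outside the limits before geom_smooth() fits its curves. To zoom in while keeping all data for the smoother, coord_cartesian() could be used instead:

# zoom without dropping points, so the smoother still uses all observations
ggplot(perfdata, aes(x = x_domain_k, y = classif.ce, color = kernel,
  group = interaction(kernel, scale))) +
  geom_point(size = 2, alpha = 0.3) + geom_smooth() +
  coord_cartesian(xlim = c(5, 40), ylim = c(0.23, 0.28))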

What about the distance parameter? If we select all results with k between 10 and 20 and scale == TRUE, and plot the error against distance colored by kernel, we see an approximate relationship:

ggplot(perfdata[x_domain_k > 10 & x_domain_k < 20 & scale == TRUE],
  aes(x = distance, y = classif.ce, color = kernel)) +
  geom_point(size = 2) + geom_smooth()

In sum, our observations are: the scale parameter is very influential, and scaling is beneficial; the distance type seems to be the least influential; and there seems to be an interaction between k and kernel.

Citation

For attribution, please cite this work as

Binder & Pfisterer (2020, March 11). mlr3gallery: mlr3tuning Tutorial - German Credit. Retrieved from https://mlr3gallery.mlr-org.com/posts/2020-03-11-mlr3tuning-tutorial-german-credit/

BibTeX citation

@misc{binder2020mlr3tuning,
  author = {Binder, Martin and Pfisterer, Florian},
  title = {mlr3gallery: mlr3tuning Tutorial - German Credit},
  url = {https://mlr3gallery.mlr-org.com/posts/2020-03-11-mlr3tuning-tutorial-german-credit/},
  year = {2020}
}