mlr3tuning Tutorial - German Credit

Tags: mlr3tuning, tuning, optimization, nested resampling, german credit data set, classification

In this use case, we continue working with the German credit dataset. We work on hyperparameter tuning and apply nested resampling.

Martin Binder, Florian Pfisterer
03-11-2020

Intro

This is the second part of a series of tutorials. The other parts of this series can be found here:

We will continue working with the German credit dataset. In Part I, we peeked into the dataset by training and comparing some learners with their default parameters. We will now see how to:

  1. tune hyperparameters of a learner with grid search and random search
  2. define larger search spaces with parameter transformations
  3. estimate the performance of tuned models with nested resampling

Prerequisites

First, load the packages we are going to use (the list below is reconstructed from the code used throughout this post):
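
library("mlr3")          # tasks, resampling, measures
library("mlr3learners")  # provides the kknn learner
library("mlr3tuning")    # tuners, tuning instances, AutoTuner
library("paradox")       # search space definition
library("ggplot2")       # visualization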

We use the same Task as in Part I:

task = tsk("german_credit")

We might also want to use multiple cores to reduce the long runtime of tuning runs.

# future::plan("multiprocess") # uncomment for parallelization; newer versions of future use "multisession"

Evaluation

We will evaluate all hyperparameter configurations using 10-fold CV. We use fixed train-test splits, i.e. the same folds for each evaluation. Otherwise, some evaluations could get unusually “hard” splits, which would make comparisons unfair.

set.seed(8008135)
cv10_instance = rsmp("cv", folds = 10)

# fix the train-test splits using the $instantiate() method
cv10_instance$instantiate(task)

# have a look at the test set instances per fold
cv10_instance$instance
      row_id fold
   1:      5    1
   2:     20    1
   3:     28    1
   4:     35    1
   5:     37    1
  ---            
 996:    936   10
 997:    950   10
 998:    963   10
 999:    985   10
1000:    994   10

Simple Parameter Tuning

Parameter tuning in mlr3 needs two packages:

  1. The paradox package is used for the search space definition of the hyperparameters
  2. The mlr3tuning package is used for tuning the hyperparameters

Search Space and Problem Definition

First, we need to decide which Learner we want to optimize. We will use LearnerClassifKKNN, the “kernelized” k-nearest-neighbor classifier, and start with plain kNN without distance weighting (i.e., the rectangular kernel):

knn = lrn("classif.kknn", predict_type = "prob")
knn$param_set$values$kernel = "rectangular"

As a next step, we decide which parameters to optimize over. Before that, let us have a look at the full set of parameters we could tune:

knn$param_set
<ParamSet>
         id    class lower upper nlevels default       value
1:        k ParamInt     1   Inf     Inf       7            
2: distance ParamDbl     0   Inf     Inf       2            
3:   kernel ParamFct    NA    NA      10 optimal rectangular
4:    scale ParamLgl    NA    NA       2    TRUE            
5:  ykernel ParamUty    NA    NA     Inf                    

We first tune the k parameter (i.e. the number of nearest neighbors) between 3 and 20. Second, we tune the distance function, allowing L1 and L2 distances. To do so, we use the paradox package to define a search space (see the online vignette for a more complete introduction).

search_space = ParamSet$new(list(
  ParamInt$new("k", lower = 3, upper = 20),
  ParamInt$new("distance", lower = 1, upper = 2)
))

As a next step, we define a TuningInstanceSingleCrit that represents the problem we are trying to optimize.

instance_grid = TuningInstanceSingleCrit$new(
  task = task,
  learner = knn,
  resampling = cv10_instance,
  measure = msr("classif.ce"),
  terminator = trm("none"),
  search_space = search_space
)

After having set up a tuning instance, we can start tuning. Before that, we need a tuning strategy. A simple tuning method is to try all possible combinations of parameters: Grid Search. While it is very intuitive and simple, it is inefficient if the search space is large. For this simple use case, it suffices, though. We get the grid_search tuner via:

set.seed(1)
tuner_grid = tnr("grid_search", resolution = 18, batch_size = 36)
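
With resolution = 18 and batch_size = 36, grid search generates up to 18 equidistant values per numeric parameter and evaluates 36 configurations per batch. Since k has exactly 18 integer values between 3 and 20 and distance only 2, this amounts to the full grid of 36 configurations. We can preview this grid using paradox's generate_design_grid() (a quick sketch):

# the grid the tuner will evaluate: 18 values of k times 2 distances = 36 points
generate_design_grid(search_space, resolution = 18)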

Tuning works by calling $optimize(). Note that the tuning procedure modifies our tuning instance (as usual for R6 class objects). The result can be found in the instance object. Before tuning it is empty:

instance_grid$result
NULL

Now, we tune:

tuner_grid$optimize(instance_grid)
   k distance learner_param_vals  x_domain classif.ce
1: 9        2          <list[3]> <list[2]>       0.25

The result is returned by $optimize() together with its performance. It can also be accessed via the $result slot:

instance_grid$result
   k distance learner_param_vals  x_domain classif.ce
1: 9        2          <list[3]> <list[2]>       0.25

We can also look at the Archive of evaluated configurations:

as.data.table(instance_grid$archive)
     k distance classif.ce                                uhash
 1:  3        1      0.271 926cc51d-c17d-46e2-b2cc-024208b43b08
 2:  3        2      0.273 7e7f4ae2-187a-4b28-a042-cc6887732359
 3:  4        1      0.292 9d0f3d37-6a80-4bcb-a072-b68c3d800c0d
 4:  4        2      0.279 71f1ee51-6774-418f-9fdf-b1fd42b1e3ee
 5:  5        1      0.271 b8c937df-4080-495a-8924-5df3f34b3a81
 6:  5        2      0.274 8437f634-22bf-4364-ba23-ad3b6f22e2eb
 7:  6        1      0.278 a38c0272-8721-4570-b938-6091c5a5f962
 8:  6        2      0.273 c14db0fe-c4ef-45ce-b4bc-52e06ca8d783
 9:  7        1      0.257 26b149e3-2b32-49b5-868b-1f89c057c2bb
10:  7        2      0.258 5e77e8ab-5e1f-4c4b-959d-1f4580efe19c
11:  8        1      0.264 1ab110ba-652e-4373-801f-8c0fe51c0ff1
12:  8        2      0.256 b6f54f30-985b-4e5b-bd03-4aecb7747c0e
13:  9        1      0.251 90cd08aa-83c6-4125-bf3d-a4cfec5d6356
14:  9        2      0.250 9c4fec64-33ba-4ebf-b43c-de315ae1ba8a
15: 10        1      0.261 67186cbe-0b8f-4f80-b8c8-4760c254f93d
16: 10        2      0.250 59bc9b8e-932a-4374-b62a-565165105ca6
17: 11        1      0.256 5b1fb985-8df2-4809-826b-9ef091828ff5
18: 11        2      0.254 8a141832-ad87-419c-98ab-81579a44d113
19: 12        1      0.260 c1410502-b57a-4cd7-9dc4-f01a483839e2
20: 12        2      0.259 157db470-c8fe-4b40-b472-a912546687e1
21: 13        1      0.268 2d152ccb-9acb-402b-862c-f7e29a424234
22: 13        2      0.258 525b5f20-ba10-497f-8d18-4a07b0ebf99f
23: 14        1      0.265 df32ef49-151c-431b-83d0-08ed7ff1db71
24: 14        2      0.263 74a142aa-62c1-442a-9f24-4b6a7f18eb45
25: 15        1      0.268 dbf3c2f4-3ac6-493e-bd50-de00b8982dbf
26: 15        2      0.264 d26b6edd-797d-4210-8ed6-20fa4507274e
27: 16        1      0.267 0df61b13-459e-4318-9326-fceff4410856
28: 16        2      0.262 36eb7e54-d4ea-416f-85f2-c226670c0839
29: 17        1      0.264 701358f1-e224-4993-8e1b-8c6bce13466d
30: 17        2      0.267 ccf5f013-b2ef-452f-a100-3915188118e0
31: 18        1      0.273 96631722-9739-4be3-91b9-b6826482cfe3
32: 18        2      0.271 282aaf2a-06a3-48d1-a451-0175486d65ab
33: 19        1      0.269 3ff59eab-8f45-455d-8198-456462242043
34: 19        2      0.269 893e7f64-7645-471b-8203-a219b1a143f5
35: 20        1      0.268 61b2c9d2-324b-489b-95eb-f49185feb86c
36: 20        2      0.269 6fd3a183-922f-4245-81cd-572679e72de0
     k distance classif.ce                                uhash
              timestamp batch_nr x_domain_k x_domain_distance
 1: 2021-04-17 04:45:13        1          3                 1
 2: 2021-04-17 04:45:13        1          3                 2
 3: 2021-04-17 04:45:13        1          4                 1
 4: 2021-04-17 04:45:13        1          4                 2
 5: 2021-04-17 04:45:13        1          5                 1
 6: 2021-04-17 04:45:13        1          5                 2
 7: 2021-04-17 04:45:13        1          6                 1
 8: 2021-04-17 04:45:13        1          6                 2
 9: 2021-04-17 04:45:13        1          7                 1
10: 2021-04-17 04:45:13        1          7                 2
11: 2021-04-17 04:45:13        1          8                 1
12: 2021-04-17 04:45:13        1          8                 2
13: 2021-04-17 04:45:13        1          9                 1
14: 2021-04-17 04:45:13        1          9                 2
15: 2021-04-17 04:45:13        1         10                 1
16: 2021-04-17 04:45:13        1         10                 2
17: 2021-04-17 04:45:13        1         11                 1
18: 2021-04-17 04:45:13        1         11                 2
19: 2021-04-17 04:45:13        1         12                 1
20: 2021-04-17 04:45:13        1         12                 2
21: 2021-04-17 04:45:13        1         13                 1
22: 2021-04-17 04:45:13        1         13                 2
23: 2021-04-17 04:45:13        1         14                 1
24: 2021-04-17 04:45:13        1         14                 2
25: 2021-04-17 04:45:13        1         15                 1
26: 2021-04-17 04:45:13        1         15                 2
27: 2021-04-17 04:45:13        1         16                 1
28: 2021-04-17 04:45:13        1         16                 2
29: 2021-04-17 04:45:13        1         17                 1
30: 2021-04-17 04:45:13        1         17                 2
31: 2021-04-17 04:45:13        1         18                 1
32: 2021-04-17 04:45:13        1         18                 2
33: 2021-04-17 04:45:13        1         19                 1
34: 2021-04-17 04:45:13        1         19                 2
35: 2021-04-17 04:45:13        1         20                 1
36: 2021-04-17 04:45:13        1         20                 2
              timestamp batch_nr x_domain_k x_domain_distance

We plot the performances depending on the sampled k and distance:

ggplot(as.data.table(instance_grid$archive), aes(x = k, y = classif.ce, color = as.factor(distance))) +
  geom_line() + geom_point(size = 3)

On average, the Euclidean distance (distance = 2) seems to work better. However, the resampling instance introduces a lot of randomness, so you may see a different result when you run the experiment yourself with a different random seed. For k, we find that values between 7 and 13 perform well.

Random Search and Transformation

Let’s have a look at a larger search space. For example, we could tune all available parameters and allow larger values of k (up to 50). We now also tune the distance parameter continuously from 1 to 3 as a double, the kernel, and whether we scale the features.

We may find two problems when doing so:

First, the resulting difference in performance between k = 3 and k = 4 is probably larger than the difference between k = 49 and k = 50: while 4 is 33% larger than 3, 50 is only 2% larger than 49. To account for this, we use a transformation function for k and optimize in log-space: we define the range for k from log(3) to log(50) and exponentiate in the transformation. Since k thereby becomes a double instead of an int (in the search space, before transformation), we round it in the trafo.

large_searchspace = ParamSet$new(list(
  ParamDbl$new("k", lower = log(3), upper = log(50)),
  ParamDbl$new("distance", lower = 1, upper = 3),
  ParamFct$new("kernel", c("rectangular", "gaussian", "rank", "optimal")),
  ParamLgl$new("scale")
))

large_searchspace$trafo = function(x, param_set) {
  x$k = round(exp(x$k))
  x
}
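
To see the transformation in action, we can sample a few random points from the search space and apply the trafo (a small sketch using paradox's generate_design_random()):

design = generate_design_random(large_searchspace, 3)
design$transpose()  # applies the trafo; each k is now an integer between 3 and 50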

The second problem is that grid search may (and often will) take a long time. For instance, trying out three values each for k, distance, and kernel, plus the two values for scale, already takes 3 × 3 × 3 × 2 = 54 evaluations. Because of this, we use a different search algorithm, namely Random Search. For it, we need to specify a termination criterion in the tuning instance, which tells the search algorithm when to stop. Here, we will terminate after 36 evaluations:

tuner_random = tnr("random_search", batch_size = 36)

instance_random = TuningInstanceSingleCrit$new(
  task = task,
  learner = knn,
  resampling = cv10_instance,
  measure = msr("classif.ce"),
  terminator = trm("evals", n_evals = 36),
  search_space = large_searchspace
)
tuner_random$optimize(instance_random)
          k distance  kernel scale learner_param_vals  x_domain classif.ce
1: 2.444957  1.54052 optimal  TRUE          <list[4]> <list[4]>      0.249
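
The transformed values of the best configuration could also be assigned to the learner directly (shown as a comment only, since we keep the rectangular-kernel setup for the next section):

# knn$param_set$values = instance_random$result_learner_param_vals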

Like before, we can review the Archive. It includes the points both before and after the transformation: one column for each parameter the Tuner sampled on the search space (values before the transformation), and additional columns with prefix x_domain_* that contain the parameters as passed to the learner (values after the transformation):

as.data.table(instance_random$archive)
           k distance      kernel scale classif.ce
 1: 2.654531 2.921236 rectangular FALSE      0.314
 2: 2.588931 1.869319        rank  TRUE      0.254
 3: 3.319396 2.425029 rectangular  TRUE      0.272
 4: 1.164253 1.799989    gaussian FALSE      0.364
 5: 2.441256 1.650704        rank  TRUE      0.253
 6: 3.158912 2.514174     optimal FALSE      0.305
 7: 3.047551 1.405385    gaussian  TRUE      0.257
 8: 2.442352 2.422242    gaussian  TRUE      0.270
 9: 3.521548 1.243384 rectangular  TRUE      0.271
10: 2.331159 1.490977     optimal  TRUE      0.252
11: 1.787328 1.286609    gaussian FALSE      0.345
12: 1.297461 1.479259        rank  TRUE      0.274
13: 1.378451 1.117869 rectangular FALSE      0.357
14: 1.988414 2.284577 rectangular FALSE      0.313
15: 2.557743 2.752538        rank  TRUE      0.267
16: 2.961104 2.557829        rank  TRUE      0.264
17: 2.243193 2.594618 rectangular  TRUE      0.256
18: 3.666907 1.910549 rectangular FALSE      0.292
19: 1.924639 1.820168        rank  TRUE      0.256
20: 2.390153 2.621740     optimal FALSE      0.339
21: 2.033775 2.209867        rank FALSE      0.332
22: 2.929778 2.309448        rank FALSE      0.302
23: 1.824519 1.706395        rank  TRUE      0.261
24: 2.444957 1.540520     optimal  TRUE      0.249
25: 3.254559 2.985368        rank FALSE      0.302
26: 1.335633 2.266987        rank FALSE      0.361
27: 3.561251 1.426416        rank FALSE      0.296
28: 2.052564 1.258745 rectangular FALSE      0.324
29: 3.460303 1.956236    gaussian FALSE      0.296
30: 2.073975 2.848149        rank  TRUE      0.269
31: 2.037658 2.197522    gaussian  TRUE      0.267
32: 2.438784 2.952341 rectangular FALSE      0.308
33: 3.608733 2.463585        rank FALSE      0.294
34: 3.530354 1.713454 rectangular FALSE      0.297
35: 2.195813 1.862947     optimal  TRUE      0.257
36: 3.285535 1.296423        rank FALSE      0.300
           k distance      kernel scale classif.ce
                                   uhash           timestamp batch_nr
 1: e7b2b2e8-40b1-40a8-a93a-67035ce2a4f2 2021-04-17 04:45:47        1
 2: 813a392a-8632-48ad-b001-30d3bf93e4f2 2021-04-17 04:45:47        1
 3: f9ef96ac-8187-4470-9e18-5d9eeb81ae71 2021-04-17 04:45:47        1
 4: 98549897-9b61-462c-8963-c4f2b1f1ad67 2021-04-17 04:45:47        1
 5: b8e5659b-6bc2-4704-8a74-f9cbdd88072f 2021-04-17 04:45:47        1
 6: 01151a87-cb90-46c2-b6e9-768f5d67a19f 2021-04-17 04:45:47        1
 7: c4ca5fe3-4a4e-4108-ab9e-b7e294e6121f 2021-04-17 04:45:47        1
 8: 1023df8b-4d4c-41f1-97a4-02b235dc6eb2 2021-04-17 04:45:47        1
 9: 37f5e32c-bf05-476f-a062-33fe8a9300f0 2021-04-17 04:45:47        1
10: a8cb6ac3-3302-4b98-b140-60c215cc4ee4 2021-04-17 04:45:47        1
11: 1e55f92e-8037-45b7-a970-00b5ccb3c364 2021-04-17 04:45:47        1
12: 9d7bc9f9-038b-405d-82ea-17629d8c32a1 2021-04-17 04:45:47        1
13: 45adfa58-79cc-41d0-8c6e-63e3a843ad87 2021-04-17 04:45:47        1
14: f90e0239-bbee-4f46-a3b9-f9e2fb5d6503 2021-04-17 04:45:47        1
15: 3092a9fa-72fb-43ac-982e-8b0167ae112e 2021-04-17 04:45:47        1
16: fa666ac3-1a36-450d-a474-5d807074d0ab 2021-04-17 04:45:47        1
17: 686b4c3b-14a2-4b53-8dc8-f548f52f25b5 2021-04-17 04:45:47        1
18: be940732-f3ac-4e9b-988f-e711ca3e6c30 2021-04-17 04:45:47        1
19: 2ede7571-e986-45bf-b113-dd1cc6d57acf 2021-04-17 04:45:47        1
20: 5f9936ab-fe5f-4a5d-8832-cf9fb920b689 2021-04-17 04:45:47        1
21: 9b215b6e-2207-429d-8931-e0f132ec3d06 2021-04-17 04:45:47        1
22: 99edba67-62d4-47a3-aa0e-847737f93fa4 2021-04-17 04:45:47        1
23: 518b2318-03a4-4ccd-be11-9457cdce479a 2021-04-17 04:45:47        1
24: 035e9783-85a0-4890-b0bb-8f852db2df7d 2021-04-17 04:45:47        1
25: 5384c132-862e-418c-ae4e-13a9c36661a0 2021-04-17 04:45:47        1
26: 1c536195-2e6c-4286-8f0b-a0ddf0e29fa0 2021-04-17 04:45:47        1
27: f08f4200-f63c-4e1b-beff-36242f110df8 2021-04-17 04:45:47        1
28: caf7d4d8-32ae-4468-8a73-3ba5c1032dd5 2021-04-17 04:45:47        1
29: 8e428778-9b96-4d5c-8230-2b39e1b403e5 2021-04-17 04:45:47        1
30: 13044b3d-f1b8-4444-8f15-593dab83ab46 2021-04-17 04:45:47        1
31: b4e9fecc-de9b-41d2-a0ba-f00e0f0dae11 2021-04-17 04:45:47        1
32: e8edeac9-7d65-4c94-bab8-f4c3ae4e3e9d 2021-04-17 04:45:47        1
33: 744e8bb0-d85b-40a1-a779-5cf80a009c67 2021-04-17 04:45:47        1
34: b745a4e7-e2d9-48bd-a5e7-f91356cf560e 2021-04-17 04:45:47        1
35: d8129adb-c3bd-422e-9454-bf6bf12f907d 2021-04-17 04:45:47        1
36: bb551de2-7e37-48d8-9ab1-746af1044e16 2021-04-17 04:45:47        1
                                   uhash           timestamp batch_nr
    x_domain_k x_domain_distance x_domain_kernel x_domain_scale
 1:         14          2.921236     rectangular          FALSE
 2:         13          1.869319            rank           TRUE
 3:         28          2.425029     rectangular           TRUE
 4:          3          1.799989        gaussian          FALSE
 5:         11          1.650704            rank           TRUE
 6:         24          2.514174         optimal          FALSE
 7:         21          1.405385        gaussian           TRUE
 8:         12          2.422242        gaussian           TRUE
 9:         34          1.243384     rectangular           TRUE
10:         10          1.490977         optimal           TRUE
11:          6          1.286609        gaussian          FALSE
12:          4          1.479259            rank           TRUE
13:          4          1.117869     rectangular          FALSE
14:          7          2.284577     rectangular          FALSE
15:         13          2.752538            rank           TRUE
16:         19          2.557829            rank           TRUE
17:          9          2.594618     rectangular           TRUE
18:         39          1.910549     rectangular          FALSE
19:          7          1.820168            rank           TRUE
20:         11          2.621740         optimal          FALSE
21:          8          2.209867            rank          FALSE
22:         19          2.309448            rank          FALSE
23:          6          1.706395            rank           TRUE
24:         12          1.540520         optimal           TRUE
25:         26          2.985368            rank          FALSE
26:          4          2.266987            rank          FALSE
27:         35          1.426416            rank          FALSE
28:          8          1.258745     rectangular          FALSE
29:         32          1.956236        gaussian          FALSE
30:          8          2.848149            rank           TRUE
31:          8          2.197522        gaussian           TRUE
32:         11          2.952341     rectangular          FALSE
33:         37          2.463585            rank          FALSE
34:         34          1.713454     rectangular          FALSE
35:          9          1.862947         optimal           TRUE
36:         27          1.296423            rank          FALSE
    x_domain_k x_domain_distance x_domain_kernel x_domain_scale

Let’s now investigate the performance by parameters. This is especially easy using visualization:

ggplot(as.data.table(instance_random$archive),
  aes(x = x_domain_k, y = classif.ce, color = x_domain_scale)) +
  geom_point(size = 3) + geom_line()

The previous plot suggests that scale has a strong influence on performance. For the kernel, there does not seem to be a strong influence:

ggplot(as.data.table(instance_random$archive),
  aes(x = x_domain_k, y = classif.ce, color = x_domain_kernel)) +
  geom_point(size = 3) + geom_line()

Nested Resampling

Having determined a few tuned configurations that seem to work well, we want to find out what performance we can expect from them. However, this requires more than the following naive approach:

instance_random$result_y
classif.ce 
     0.249 
instance_grid$result_y
classif.ce 
      0.25 

The problem associated with evaluating tuned models is overtuning. The more we search, the more optimistically biased the associated performance metrics from tuning become.

There is a solution to this problem, namely Nested Resampling.

The mlr3tuning package provides an AutoTuner that acts like our tuning method but is actually a Learner. Its $train() method first tunes the hyperparameters on the training data, using an inner resampling strategy (below: 5-fold cross-validation), and then fits a model with the optimal hyperparameters on the whole training data.

The AutoTuner finds the best parameters and uses them for training:

grid_auto = AutoTuner$new(
  learner = knn,
  resampling = rsmp("cv", folds = 5), # we can NOT use fixed resampling here
  measure = msr("classif.ce"),
  terminator = trm("none"),
  tuner = tnr("grid_search", resolution = 18),
  search_space = search_space
)

The AutoTuner behaves just like a regular Learner and combines the steps of hyperparameter tuning and model fitting. For example, it could be trained and used for prediction directly, as sketched below (not run here, since it would repeat the whole tuning):
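
# grid_auto$train(task)    # tunes k and distance via inner 5-fold CV, then refits on all of task
# grid_auto$predict(task)  # predicts with the refitted model

The AutoTuner is especially useful, however, for resampling and fair comparison of performance through benchmarking: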

rr = resample(task, grid_auto, cv10_instance, store_models = TRUE)

We aggregate the performances of all resampling iterations:

rr$aggregate()
classif.ce 
     0.256 
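
The per-fold performances behind this aggregate can be inspected via $score():

rr$score(msr("classif.ce"))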

Essentially, this is the performance of a “knn with optimal hyperparameters found by grid search”. Note that grid_auto itself is not changed, since resample() creates a clone for each resampling iteration. The trained AutoTuner objects can be accessed as follows:

rr$learners[[1]]
<AutoTuner:classif.kknn.tuned>
* Model: list
* Parameters: list()
* Packages: kknn
* Predict Type: prob
* Feature types: logical, integer, numeric, factor, ordered
* Properties: multiclass, twoclass
rr$learners[[1]]$tuning_result
   k distance learner_param_vals  x_domain classif.ce
1: 9        2          <list[3]> <list[2]>       0.26
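
The same accessor works for every outer fold, so we can collect all inner tuning results at once (a quick sketch):

# the configuration selected in each of the 10 outer folds
lapply(rr$learners, function(l) l$tuning_result)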

Appendix

Example: Tuning With A Larger Budget

It is always interesting to look at what could have been. The following dataset, perfdata, contains the archive of an optimization run with 3600 evaluations, 100 times more than above. Such a run can be set up exactly like the random search above, only with a larger budget, as sketched below (not evaluated here):
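
# instance_large = TuningInstanceSingleCrit$new(
#   task = task, learner = knn, resampling = cv10_instance,
#   measure = msr("classif.ce"),
#   terminator = trm("evals", n_evals = 3600),
#   search_space = large_searchspace
# )
# tnr("random_search", batch_size = 36)$optimize(instance_large)
# perfdata = as.data.table(instance_large$archive)

The first and last rows of perfdata look like this: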

             k distance   kernel scale classif.ce
   1: 2.191216 2.232217 gaussian FALSE      0.312
   2: 3.549142 1.058476     rank FALSE      0.296
   3: 2.835727 2.121690  optimal  TRUE      0.251
   4: 1.118085 1.275450     rank FALSE      0.368
   5: 2.790168 2.126899  optimal FALSE      0.320
  ---                                            
3596: 3.023075 1.413180  optimal FALSE      0.306
3597: 3.243131 1.827885 gaussian  TRUE      0.255
3598: 1.628957 2.254808     rank  TRUE      0.271
3599: 3.298112 2.984946  optimal FALSE      0.301
3600: 3.855455 2.613641 gaussian FALSE      0.294
                                     uhash           timestamp batch_nr
   1: 0bac4d72-2dad-4d0f-8502-22cc27284bb2 2020-12-09 15:08:29        1
   2: fe8de768-e91a-4a89-9fe2-521083a24466 2020-12-09 15:08:29        1
   3: bbeab2d5-19aa-4382-a351-d5ece72a58a2 2020-12-09 15:08:29        1
   4: 93b5f285-66b9-4ea6-a8fc-700a11ed08a2 2020-12-09 15:08:29        1
   5: 1ab6ca63-96db-414e-9932-67f23592aab9 2020-12-09 15:08:29        1
  ---                                                                  
3596: ae870063-e324-4f69-bfea-7c973ccabb02 2020-12-09 16:05:45      100
3597: 8f2d97f6-9ead-482d-96df-dade4277054c 2020-12-09 16:05:45      100
3598: 5b1d82a7-d330-48a6-b4be-af06d6a73a4c 2020-12-09 16:05:45      100
3599: 9e77de4a-f3ab-43fc-a89b-987e53be04a4 2020-12-09 16:05:45      100
3600: 1ce1c23f-a756-482a-8b46-65abb1270fd2 2020-12-09 16:05:45      100
      x_domain_k x_domain_distance x_domain_kernel x_domain_scale
   1:          9          2.232217        gaussian          FALSE
   2:         35          1.058476            rank          FALSE
   3:         17          2.121690         optimal           TRUE
   4:          3          1.275450            rank          FALSE
   5:         16          2.126899         optimal          FALSE
  ---                                                            
3596:         21          1.413180         optimal          FALSE
3597:         26          1.827885        gaussian           TRUE
3598:          5          2.254808            rank           TRUE
3599:         27          2.984946         optimal          FALSE
3600:         47          2.613641        gaussian          FALSE

The scale effect is just as visible as before, where we had far fewer data points:

ggplot(perfdata, aes(x = x_domain_k, y = classif.ce, color = scale)) +
  geom_point(size = 2, alpha = 0.3)

Now, there seems to be a visible pattern by kernel as well:

ggplot(perfdata, aes(x = x_domain_k, y = classif.ce, color = kernel)) +
  geom_point(size = 2, alpha = 0.3)

In fact, if we zoom in to (5, 40) × (0.23, 0.28) and apply smoothing, we see that different kernels have their optimum at different values of k:

ggplot(perfdata, aes(x = x_domain_k, y = classif.ce, color = kernel,
  group = interaction(kernel, scale))) +
  geom_point(size = 2, alpha = 0.3) + geom_smooth() +
  xlim(5, 40) + ylim(0.23, 0.28)

What about the distance parameter? If we select all results with k between 10 and 20 and scaling enabled, and plot the performance against distance per kernel, we see an approximate relationship:

ggplot(perfdata[x_domain_k > 10 & x_domain_k < 20 & scale == TRUE],
  aes(x = distance, y = classif.ce, color = kernel)) +
  geom_point(size = 2) + geom_smooth()

In sum, our observations are: the scale parameter is very influential, and scaling is beneficial; the distance type seems to be the least influential; and there seems to be an interaction between k and kernel.

Citation

For attribution, please cite this work as

Binder & Pfisterer (2020, March 11). mlr3gallery: mlr3tuning Tutorial - German Credit. Retrieved from https://mlr3gallery.mlr-org.com/posts/2020-03-11-mlr3tuning-tutorial-german-credit/

BibTeX citation

@misc{binder2020mlr3tuning,
  author = {Binder, Martin and Pfisterer, Florian},
  title = {mlr3gallery: mlr3tuning Tutorial - German Credit},
  url = {https://mlr3gallery.mlr-org.com/posts/2020-03-11-mlr3tuning-tutorial-german-credit/},
  year = {2020}
}