mlr3 Basics - German Credit


In this use case, we teach the basics of mlr3 by training different models on the German credit dataset.

Martin Binder, Florian Pfisterer, Michel Lang
03-11-2020

Intro

This is the first part of a series of tutorials. The other parts of this series can be found in the mlr3 gallery.

We will walk through this tutorial interactively. The text is kept short to be followed in real time.

Prerequisites

Ensure all packages used in this tutorial are installed. This includes packages from the mlr3 family, as well as other packages for data handling, cleaning and visualization which we are going to use (data.table, ggplot2, rchallenge, and skimr).

Then, load the main packages we are going to use:
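
A minimal sketch of what this step looks like (assuming the base mlr3 package plus data.table and ggplot2; mlr3learners is attached later in the Learner section):

library("mlr3")
library("data.table")
library("ggplot2")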

Machine Learning Use Case: German Credit Data

The German credit data was originally donated in 1994 by Prof. Dr. Hans Hofmann of the University of Hamburg. A description can be found at the UCI repository. The goal is to classify people by their credit risk (good or bad) using 20 personal, demographic and financial features:

age: age in years
amount: credit amount asked for by the applicant
credit_history: past credit history of the applicant at this bank
duration: duration of the credit in months
employment_duration: present employment since
foreign_worker: is the applicant a foreign worker?
housing: type of apartment: rented, owned, for free / no payment
installment_rate: installment rate in percentage of disposable income
job: current job information
number_credits: number of existing credits at this bank
other_debtors: other debtors/guarantors present?
other_installment_plans: other installment plans the applicant is paying
people_liable: number of people being liable to provide maintenance
personal_status_sex: combination of sex and personal status of the applicant
present_residence: present residence since
property: properties that the applicant has
purpose: reason the customer is applying for a loan
savings: savings accounts/bonds at this bank
status: status/balance of the checking account at this bank
telephone: is there any telephone registered for this customer?

Importing the Data

The dataset we are going to use is a transformed version of this German credit dataset, as provided by the rchallenge package (this transformed dataset was proposed by Ulrike Grömping, with factors instead of dummy variables and corrected features):

data("german", package = "rchallenge")

First, we’ll do a thorough investigation of the dataset.

Exploring the Data

We can get a quick overview of the dataset using the dim() and str() functions:

dim(german)
[1] 1000   21
str(german)
'data.frame':   1000 obs. of  21 variables:
 $ status                 : Factor w/ 4 levels "no checking account",..: 1 1 2 1 1 1 1 1 4 2 ...
 $ duration               : int  18 9 12 12 12 10 8 6 18 24 ...
 $ credit_history         : Factor w/ 5 levels "delay in paying off in the past",..: 5 5 3 5 5 5 5 5 5 3 ...
 $ purpose                : Factor w/ 11 levels "others","car (new)",..: 3 1 10 1 1 1 1 1 4 4 ...
 $ amount                 : int  1049 2799 841 2122 2171 2241 3398 1361 1098 3758 ...
 $ savings                : Factor w/ 5 levels "unknown/no savings account",..: 1 1 2 1 1 1 1 1 1 3 ...
 $ employment_duration    : Factor w/ 5 levels "unemployed","< 1 yr",..: 2 3 4 3 3 2 4 2 1 1 ...
 $ installment_rate       : Ord.factor w/ 4 levels ">= 35"<"25 <= ... < 35"<..: 4 2 2 3 4 1 1 2 4 1 ...
 $ personal_status_sex    : Factor w/ 4 levels "male : divorced/separated",..: 2 3 2 3 3 3 3 3 2 2 ...
 $ other_debtors          : Factor w/ 3 levels "none","co-applicant",..: 1 1 1 1 1 1 1 1 1 1 ...
 $ present_residence      : Ord.factor w/ 4 levels "< 1 yr"<"1 <= ... < 4 yrs"<..: 4 2 4 2 4 3 4 4 4 4 ...
 $ property               : Factor w/ 4 levels "unknown / no property",..: 2 1 1 1 2 1 1 1 3 4 ...
 $ age                    : int  21 36 23 39 38 48 39 40 65 23 ...
 $ other_installment_plans: Factor w/ 3 levels "bank","stores",..: 3 3 3 3 1 3 3 3 3 3 ...
 $ housing                : Factor w/ 3 levels "for free","rent",..: 1 1 1 1 2 1 2 2 2 1 ...
 $ number_credits         : Ord.factor w/ 4 levels "1"<"2-3"<"4-5"<..: 1 2 1 2 2 2 2 1 2 1 ...
 $ job                    : Factor w/ 4 levels "unemployed/unskilled - non-resident",..: 3 3 2 2 2 2 2 2 1 1 ...
 $ people_liable          : Factor w/ 2 levels "3 or more","0 to 2": 2 1 2 1 2 1 2 1 2 2 ...
 $ telephone              : Factor w/ 2 levels "no","yes (under customer name)": 1 1 1 1 1 1 1 1 1 1 ...
 $ foreign_worker         : Factor w/ 2 levels "yes","no": 2 2 2 1 1 1 1 1 2 2 ...
 $ credit_risk            : Factor w/ 2 levels "bad","good": 2 2 2 2 2 2 2 2 2 2 ...

Our dataset has 1000 observations and 21 columns. The variable we want to predict is credit_risk (either good or bad), i.e., we aim to classify people by their credit risk.

We also recommend the skimr package, as it creates very readable and comprehensible overviews:

skimr::skim(german)
Table 1: Data summary
Name german
Number of rows 1000
Number of columns 21
_______________________
Column type frequency:
factor 18
numeric 3
________________________
Group variables None

Variable type: factor

skim_variable n_missing complete_rate ordered n_unique top_counts
status 0 1 FALSE 4 …: 394, no : 274, …: 269, 0<=: 63
credit_history 0 1 FALSE 5 no : 530, all: 293, exi: 88, cri: 49
purpose 0 1 FALSE 10 fur: 280, oth: 234, car: 181, car: 103
savings 0 1 FALSE 5 unk: 603, …: 183, …: 103, 100: 63
employment_duration 0 1 FALSE 5 1 <: 339, >= : 253, 4 <: 174, < 1: 172
installment_rate 0 1 TRUE 4 < 2: 476, 25 : 231, 20 : 157, >= : 136
personal_status_sex 0 1 FALSE 4 mal: 548, fem: 310, fem: 92, mal: 50
other_debtors 0 1 FALSE 3 non: 907, gua: 52, co-: 41
present_residence 0 1 TRUE 4 >= : 413, 1 <: 308, 4 <: 149, < 1: 130
property 0 1 FALSE 4 bui: 332, unk: 282, car: 232, rea: 154
other_installment_plans 0 1 FALSE 3 non: 814, ban: 139, sto: 47
housing 0 1 FALSE 3 ren: 714, for: 179, own: 107
number_credits 0 1 TRUE 4 1: 633, 2-3: 333, 4-5: 28, >= : 6
job 0 1 FALSE 4 ski: 630, uns: 200, man: 148, une: 22
people_liable 0 1 FALSE 2 0 t: 845, 3 o: 155
telephone 0 1 FALSE 2 no: 596, yes: 404
foreign_worker 0 1 FALSE 2 no: 963, yes: 37
credit_risk 0 1 FALSE 2 goo: 700, bad: 300

Variable type: numeric

skim_variable n_missing complete_rate mean sd p0 p25 p50 p75 p100 hist
duration 0 1 20.90 12.06 4 12.0 18.0 24.00 72 ▇▇▂▁▁
amount 0 1 3271.25 2822.75 250 1365.5 2319.5 3972.25 18424 ▇▂▁▁▁
age 0 1 35.54 11.35 19 27.0 33.0 42.00 75 ▇▆▃▁▁

During an exploratory analysis, meaningful discoveries could be, for example, skewed feature distributions, strongly unbalanced classes, or implausible values.

An exploratory analysis is crucial to get a feeling for your data. It also serves to validate the data: non-plausible values can be investigated and outliers can be removed.

Once we feel confident about the data, we can start modeling.

Modeling

How we are going to tackle the problem of classifying credit risk relates closely to which mlr3 entities we will use.

The typical questions that arise when building a machine learning workflow are:

  1. What is the problem we are trying to solve?
  2. What are appropriate learning algorithms?
  3. How do we evaluate “good” performance?

More systematically in mlr3 they can be expressed via five components:

  1. The Task definition.
  2. The Learner definition.
  3. The training.
  4. The prediction.
  5. The evaluation via one or multiple Measures.

Task Definition

First, we are interested in the target which we want to model. Most supervised machine learning problems are regression or classification problems. However, note that other problems exist as well, such as unsupervised learning or time-to-event data (the latter covered in mlr3proba).

Within mlr3, to distinguish between these problems, we define Tasks. If we want to solve a classification problem, we define a classification task – TaskClassif. For a regression problem, we define a regression task – TaskRegr.

In our case it is clearly our objective to model or predict the binary factor variable credit_risk. Thus, we define a TaskClassif:

task = TaskClassif$new("GermanCredit", german, target = "credit_risk")

Note that the German credit data is also given as an example task which ships with the mlr3 package. Thus, you actually don’t need to construct it yourself, just call tsk("german_credit") to retrieve the object from the dictionary mlr_tasks.
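
For example, the following retrieves the predefined task from the dictionary (a minimal sketch; the retrieved task is equivalent to our manually constructed one, up to its id):

tsk("german_credit")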

Learner Definition

After having decided what should be modeled, we need to decide on how. This means we need to decide which learning algorithms, or Learners are appropriate. Using prior knowledge (e.g. knowing that it is a classification task or assuming that the classes are linearly separable) one ends up with one or more suitable Learners.

Many learners can be obtained via the mlr3learners package. Additional learners are provided via the mlr3extralearners package on GitHub. These two resources combined account for a large fraction of standard learning algorithms. As mlr3 usually only wraps learners from other packages, it is even easy to create a formal Learner yourself. You may find the section about extending mlr3 in the mlr3book very helpful. If you happen to write your own Learner for mlr3, we would be happy if you shared it with the mlr3 community.

All available Learners (i.e. all which you have installed from mlr3, mlr3learners, mlr3extralearners, or self-written ones) are registered in the dictionary mlr_learners:

mlr_learners
<DictionaryLearner> with 29 stored values
Keys: classif.cv_glmnet, classif.debug, classif.featureless, classif.glmnet, classif.kknn, classif.lda,
  classif.log_reg, classif.multinom, classif.naive_bayes, classif.nnet, classif.qda, classif.ranger,
  classif.rpart, classif.svm, classif.xgboost, regr.cv_glmnet, regr.featureless, regr.glmnet, regr.kknn,
  regr.km, regr.lm, regr.ranger, regr.rpart, regr.svm, regr.xgboost, surv.cv_glmnet, surv.glmnet,
  surv.ranger, surv.xgboost

For our problem, a suitable learner could be one of the following: Logistic regression, CART, random forest (or many more).

A learner can be initialized with the lrn() function and the name of the learner, e.g., lrn("classif.xxx"). Use ?mlr_learners_xxx to open the help page of a learner named xxx.

For example, a logistic regression can be initialized in the following manner (logistic regression uses R’s glm() function and is provided by the mlr3learners package):

library("mlr3learners")
learner_logreg = lrn("classif.log_reg")
print(learner_logreg)
<LearnerClassifLogReg:classif.log_reg>
* Model: -
* Parameters: list()
* Packages: stats
* Predict Type: response
* Feature types: logical, integer, numeric, character, factor, ordered
* Properties: twoclass, weights

Training

Training is the procedure where a model is fitted to the (training) data.

Logistic Regression

We start with the example of logistic regression. However, you will immediately see that the procedure generalizes easily to any other learner.

An initialized learner can be trained on data using $train():

learner_logreg$train(task)

Typically, in machine learning, one does not use all of the available data for model fitting but only a subset, the so-called training data.

To efficiently perform a split of the data one could do the following:

train_set = sample(task$row_ids, 0.8 * task$nrow)
test_set = setdiff(task$row_ids, train_set)
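
Note that sample() draws randomly, so the split changes between runs; for reproducible results one would fix the seed beforehand (a minimal sketch; the seed value 42 is arbitrary):

set.seed(42)  # arbitrary seed, for illustration only
train_set = sample(task$row_ids, 0.8 * task$nrow)
test_set = setdiff(task$row_ids, train_set)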

80 percent of the data is used for training. The remaining 20 percent will be used for evaluation at a later point. train_set is an integer vector referring to the selected rows of the original dataset:

head(train_set)
[1] 192 887  86 784 801 673

In mlr3, training on a subset of the data can be declared via the additional argument row_ids = train_set:

learner_logreg$train(task, row_ids = train_set)

The fitted model can be accessed via:

learner_logreg$model

Call:  stats::glm(formula = task$formula(), family = "binomial", data = task$data(), 
    model = FALSE)

Coefficients:
                                              (Intercept)                                                        age  
                                               -1.161e+00                                                  1.387e-02  
                                                   amount     credit_historycritical account/other credits elsewhere  
                                               -8.189e-05                                                 -3.385e-01  
credit_historyno credits taken/all credits paid back duly     credit_historyexisting credits paid back duly till now  
                                                2.385e-01                                                  6.930e-01  
    credit_historyall credits at this bank paid back duly                                                   duration  
                                                1.302e+00                                                 -3.422e-02  
                                employment_duration< 1 yr                        employment_duration1 <= ... < 4 yrs  
                                                8.312e-03                                                  2.089e-01  
                      employment_duration4 <= ... < 7 yrs                                employment_duration>= 7 yrs  
                                                9.245e-01                                                  1.293e-01  
                                         foreign_workerno                                                housingrent  
                                               -1.164e+00                                                  6.107e-01  
                                               housingown                                         installment_rate.L  
                                                5.910e-01                                                 -8.743e-01  
                                       installment_rate.Q                                         installment_rate.C  
                                                5.084e-02                                                 -5.452e-02  
                                  jobunskilled - resident                               jobskilled employee/official  
                                               -2.163e-01                                                 -1.860e-01  
            jobmanager/self-empl./highly qualif. employee                                           number_credits.L  
                                               -3.160e-01                                                 -4.348e-01  
                                         number_credits.Q                                           number_credits.C  
                                                5.699e-01                                                  4.899e-01  
                                other_debtorsco-applicant                                     other_debtorsguarantor  
                                               -3.478e-01                                                  9.728e-01  
                            other_installment_plansstores                                other_installment_plansnone  
                                                2.088e-01                                                  4.289e-01  
                                      people_liable0 to 2    personal_status_sexfemale : non-single or male : single  
                                                4.150e-01                                                  3.585e-01  
                personal_status_sexmale : married/widowed                         personal_status_sexfemale : single  
                                                9.844e-01                                                  4.560e-01  
                                      present_residence.L                                        present_residence.Q  
                                               -3.701e-01                                                  5.865e-01  
                                      present_residence.C                                       propertycar or other  
                                               -1.482e-01                                                 -5.933e-01  
        propertybuilding soc. savings agr./life insurance                                        propertyreal estate  
                                               -1.004e-01                                                 -4.304e-01  
                                         purposecar (new)                                          purposecar (used)  
                                                1.388e+00                                                  4.804e-01  
                               purposefurniture/equipment                                    purposeradio/television  
                                                8.181e-01                                                  3.062e-01  
                               purposedomestic appliances                                             purposerepairs  
                                                2.865e-01                                                 -1.949e-01  
                                          purposevacation                                          purposeretraining  
                                                1.930e+00                                                  7.339e-01  
                                          purposebusiness                                       savings... <  100 DM  
                                                9.512e-01                                                  5.394e-01  
                              savings100 <= ... <  500 DM                                savings500 <= ... < 1000 DM  
                                                1.901e-01                                                  1.613e+00  
                                    savings... >= 1000 DM                                           status... < 0 DM  
                                                1.085e+00                                                 -8.364e-02  
                                   status0<= ... < 200 DM           status... >= 200 DM / salary for at least 1 year  
                                                6.087e-01                                                  1.530e+00  
                       telephoneyes (under customer name)  
                                                2.401e-01  

Degrees of Freedom: 799 Total (i.e. Null);  745 Residual
Null Deviance:      985.7 
Residual Deviance: 722.5    AIC: 832.5

The stored object is a normal glm object and all its S3 methods work as expected:

class(learner_logreg$model)
[1] "glm" "lm" 
summary(learner_logreg$model)

Call:
stats::glm(formula = task$formula(), family = "binomial", data = task$data(), 
    model = FALSE)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-2.5339  -0.7076   0.3813   0.7021   2.2021  

Coefficients:
                                                            Estimate Std. Error z value Pr(>|z|)    
(Intercept)                                               -1.161e+00  1.265e+00  -0.918 0.358879    
age                                                        1.387e-02  1.047e-02   1.325 0.185250    
amount                                                    -8.189e-05  5.063e-05  -1.618 0.105745    
credit_historycritical account/other credits elsewhere    -3.385e-01  6.276e-01  -0.539 0.589622    
credit_historyno credits taken/all credits paid back duly  2.385e-01  4.967e-01   0.480 0.631063    
credit_historyexisting credits paid back duly till now     6.930e-01  5.430e-01   1.276 0.201822    
credit_historyall credits at this bank paid back duly      1.302e+00  4.848e-01   2.685 0.007257 ** 
duration                                                  -3.422e-02  1.102e-02  -3.105 0.001900 ** 
employment_duration< 1 yr                                  8.312e-03  4.674e-01   0.018 0.985812    
employment_duration1 <= ... < 4 yrs                        2.089e-01  4.422e-01   0.472 0.636634    
employment_duration4 <= ... < 7 yrs                        9.245e-01  4.967e-01   1.861 0.062718 .  
employment_duration>= 7 yrs                                1.293e-01  4.474e-01   0.289 0.772610    
foreign_workerno                                          -1.164e+00  6.548e-01  -1.778 0.075348 .  
housingrent                                                6.107e-01  2.591e-01   2.357 0.018416 *  
housingown                                                 5.910e-01  5.560e-01   1.063 0.287846    
installment_rate.L                                        -8.743e-01  2.448e-01  -3.572 0.000355 ***
installment_rate.Q                                         5.084e-02  2.146e-01   0.237 0.812721    
installment_rate.C                                        -5.452e-02  2.219e-01  -0.246 0.805943    
jobunskilled - resident                                   -2.163e-01  7.037e-01  -0.307 0.758526    
jobskilled employee/official                              -1.860e-01  6.785e-01  -0.274 0.783924    
jobmanager/self-empl./highly qualif. employee             -3.160e-01  6.996e-01  -0.452 0.651536    
number_credits.L                                          -4.348e-01  7.620e-01  -0.571 0.568284    
number_credits.Q                                           5.699e-01  6.533e-01   0.872 0.383056    
number_credits.C                                           4.899e-01  5.020e-01   0.976 0.329072    
other_debtorsco-applicant                                 -3.478e-01  4.684e-01  -0.743 0.457778    
other_debtorsguarantor                                     9.728e-01  4.571e-01   2.128 0.033301 *  
other_installment_plansstores                              2.088e-01  4.840e-01   0.431 0.666206    
other_installment_plansnone                                4.289e-01  2.757e-01   1.555 0.119866    
people_liable0 to 2                                        4.150e-01  2.822e-01   1.470 0.141442    
personal_status_sexfemale : non-single or male : single    3.585e-01  4.321e-01   0.830 0.406754    
personal_status_sexmale : married/widowed                  9.844e-01  4.253e-01   2.315 0.020633 *  
personal_status_sexfemale : single                         4.560e-01  4.978e-01   0.916 0.359572    
present_residence.L                                       -3.701e-01  2.491e-01  -1.486 0.137256    
present_residence.Q                                        5.865e-01  2.295e-01   2.555 0.010605 *  
present_residence.C                                       -1.482e-01  2.212e-01  -0.670 0.502829    
propertycar or other                                      -5.933e-01  2.855e-01  -2.078 0.037690 *  
propertybuilding soc. savings agr./life insurance         -1.004e-01  2.721e-01  -0.369 0.712132    
propertyreal estate                                       -4.304e-01  4.997e-01  -0.861 0.389101    
purposecar (new)                                           1.388e+00  4.077e-01   3.404 0.000665 ***
purposecar (used)                                          4.804e-01  2.923e-01   1.644 0.100230    
purposefurniture/equipment                                 8.181e-01  2.849e-01   2.872 0.004077 ** 
purposeradio/television                                    3.062e-01  8.038e-01   0.381 0.703207    
purposedomestic appliances                                 2.865e-01  5.909e-01   0.485 0.627821    
purposerepairs                                            -1.949e-01  4.449e-01  -0.438 0.661328    
purposevacation                                            1.930e+00  1.162e+00   1.661 0.096720 .  
purposeretraining                                          7.339e-01  3.894e-01   1.885 0.059482 .  
purposebusiness                                            9.512e-01  8.640e-01   1.101 0.270937    
savings... <  100 DM                                       5.394e-01  3.279e-01   1.645 0.099955 .  
savings100 <= ... <  500 DM                                1.901e-01  4.238e-01   0.448 0.653793    
savings500 <= ... < 1000 DM                                1.613e+00  6.001e-01   2.687 0.007199 ** 
savings... >= 1000 DM                                      1.085e+00  3.031e-01   3.579 0.000345 ***
status... < 0 DM                                          -8.364e-02  2.449e-01  -0.342 0.732715    
status0<= ... < 200 DM                                     6.087e-01  4.113e-01   1.480 0.138920    
status... >= 200 DM / salary for at least 1 year           1.530e+00  2.610e-01   5.861 4.61e-09 ***
telephoneyes (under customer name)                         2.401e-01  2.295e-01   1.046 0.295368    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 985.71  on 799  degrees of freedom
Residual deviance: 722.54  on 745  degrees of freedom
AIC: 832.54

Number of Fisher Scoring iterations: 5

Random Forest

Just like the logistic regression, we could train a random forest instead. We use the fast implementation from the ranger package. For this, we first need to define the learner and then actually train it.

We now additionally supply the importance argument (importance = "permutation"). This overrides the default and makes the learner determine feature importance based on permutation feature importance:

learner_rf = lrn("classif.ranger", importance = "permutation")
learner_rf$train(task, row_ids = train_set)

We can access the importance values using $importance():

learner_rf$importance()
                 status                duration                  amount          credit_history                     age 
           0.0366353018            0.0164922754            0.0110246421            0.0091906527            0.0058592432 
               property                 savings           other_debtors     employment_duration       present_residence 
           0.0057311771            0.0057076159            0.0044533084            0.0039690067            0.0038648741 
                purpose                 housing        installment_rate     personal_status_sex other_installment_plans 
           0.0034531584            0.0031057790            0.0023732275            0.0021878998            0.0019781819 
                    job          number_credits           people_liable               telephone          foreign_worker 
           0.0017709304            0.0013420634            0.0013335006            0.0008849702            0.0001876096 

In order to obtain a plot for the importance values, we convert the importance to a data.table and then process it with ggplot2:

importance = as.data.table(learner_rf$importance(), keep.rownames = TRUE)
colnames(importance) = c("Feature", "Importance")
ggplot(importance, aes(x = reorder(Feature, Importance), y = Importance)) +
  geom_col() + coord_flip() + xlab("")

Prediction

Let’s see what the models predict.

After training a model, the model can be used for prediction. Usually, prediction is the main purpose of machine learning models.

In our case, the model can be used to classify new credit applicants w.r.t. their associated credit risk (good vs. bad) on the basis of the features. Typically, machine learning models predict numeric values. In the regression case this is very natural. For classification, most models predict scores or probabilities. Based on these values, one can derive class predictions.

Predict Classes

First, we directly predict classes:

pred_logreg = learner_logreg$predict(task, row_ids = test_set)
pred_rf = learner_rf$predict(task, row_ids = test_set)
pred_logreg
<PredictionClassif> for 200 observations:
    row_ids truth response
          6  good     good
         14  good     good
         24  good     good
---                       
        995   bad      bad
        997   bad     good
       1000   bad     good
pred_rf
<PredictionClassif> for 200 observations:
    row_ids truth response
          6  good     good
         14  good      bad
         24  good     good
---                       
        995   bad      bad
        997   bad     good
       1000   bad     good

The $predict() method returns a Prediction object. It can be converted to a data.table if one wants to use it downstream.
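
For instance, a minimal sketch of the conversion:

head(as.data.table(pred_logreg))  # columns: row_ids, truth, response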

We can also display the prediction results aggregated in a confusion matrix:

pred_logreg$confusion
        truth
response bad good
    bad   28   22
    good  27  123
pred_rf$confusion
        truth
response bad good
    bad   26   11
    good  29  134
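
A single Prediction can also be scored directly against a measure via $score() (a minimal sketch; measures and msr() are covered in more detail below):

pred_logreg$score(msr("classif.acc"))  # accuracy of the logistic regression on the test set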

Predict Probabilities

Most learners not only predict a class variable (“response”), but also their degree of “belief” / “uncertainty” in a given response. Typically, we achieve this by setting the $predict_type field of a Learner to "prob". Sometimes this needs to be done before the learner is trained. Alternatively, we can directly create the learner with this option: lrn("classif.log_reg", predict_type = "prob").

learner_logreg$predict_type = "prob"
learner_logreg$predict(task, row_ids = test_set)
<PredictionClassif> for 200 observations:
    row_ids truth response  prob.bad prob.good
          6  good     good 0.1219120 0.8780880
         14  good     good 0.3808073 0.6191927
         24  good     good 0.2255735 0.7744265
---                                           
        995   bad      bad 0.7827301 0.2172699
        997   bad     good 0.4924503 0.5075497
       1000   bad     good 0.4860011 0.5139989

Note that sometimes one needs to be cautious when dealing with the probability interpretation of the predictions.
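
The class response is derived from these probabilities via a threshold, which is 0.5 by default for binary tasks and can be changed on the prediction object. A minimal sketch (the threshold value 0.7 is arbitrary):

pred = learner_logreg$predict(task, row_ids = test_set)
pred$set_threshold(0.7)  # probability threshold for the task's positive class
pred$confusion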

Performance Evaluation

To measure the performance of a learner on new unseen data, we usually mimic the scenario of unseen data by splitting up the data into training and test set. The training set is used for training the learner, and the test set is only used for predicting and evaluating the performance of the trained learner. Numerous resampling methods (cross-validation, bootstrap) repeat the splitting process in different ways.

Within mlr3, we need to specify the resampling strategy using the rsmp() function:

resampling = rsmp("holdout", ratio = 2/3)
print(resampling)
<ResamplingHoldout> with 1 iterations
* Instantiated: FALSE
* Parameters: ratio=0.6667

Here, we use “holdout”, a simple train-test split (with just one iteration). We use the resample() function to undertake the resampling calculation:

res = resample(task, learner = learner_logreg, resampling = resampling)
res
<ResampleResult> of 1 iterations
* Task: GermanCredit
* Learner: classif.log_reg
* Warnings: 0 in 0 iterations
* Errors: 0 in 0 iterations

The score of the default measure can be retrieved with the $aggregate() method:

res$aggregate()
classif.ce 
 0.2642643 

The default measure in this scenario is the classification error. Lower is better.

We can easily run different resampling strategies, e.g. repeated holdout ("subsampling") or cross-validation. Most methods perform repeated train/predict cycles on different data subsets and aggregate the result (usually as the mean). Doing this manually would require us to write loops. mlr3 does the job for us:

resampling = rsmp("subsampling", repeats = 10)
rr = resample(task, learner = learner_logreg, resampling = resampling)
rr$aggregate()
classif.ce 
 0.2546547 

Instead, we could also run cross-validation:

resampling = rsmp("cv", folds = 10)
rr = resample(task, learner = learner_logreg, resampling = resampling)
rr$aggregate()
classif.ce 
     0.253 

mlr3 features scores for many more measures. Here, we apply mlr_measures_classif.fpr for the false positive rate and mlr_measures_classif.fnr for the false negative rate. Multiple measures can be provided as a list of measures (which can directly be constructed via msrs()):

# false positive rate
rr$aggregate(msr("classif.fpr"))
classif.fpr 
  0.1394511 
# false positive rate and false negative rate
measures = msrs(c("classif.fpr", "classif.fnr"))
rr$aggregate(measures)
classif.fpr classif.fnr 
  0.1394511   0.5211055 

There are a few more resampling methods, and quite a few more measures (implemented in mlr3measures). They are automatically registered in the respective dictionaries:

mlr_resamplings
<DictionaryResampling> with 8 stored values
Keys: bootstrap, custom, cv, holdout, insample, loo, repeated_cv, subsampling
mlr_measures
<DictionaryMeasure> with 54 stored values
Keys: classif.acc, classif.auc, classif.bacc, classif.bbrier, classif.ce, classif.costs, classif.dor,
  classif.fbeta, classif.fdr, classif.fn, classif.fnr, classif.fomr, classif.fp, classif.fpr,
  classif.logloss, classif.mbrier, classif.mcc, classif.npv, classif.ppv, classif.prauc, classif.precision,
  classif.recall, classif.sensitivity, classif.specificity, classif.tn, classif.tnr, classif.tp,
  classif.tpr, debug, oob_error, regr.bias, regr.ktau, regr.mae, regr.mape, regr.maxae, regr.medae,
  regr.medse, regr.mse, regr.msle, regr.pbias, regr.rae, regr.rmse, regr.rmsle, regr.rrse, regr.rse,
  regr.rsq, regr.sae, regr.smape, regr.srho, regr.sse, selected_features, time_both, time_predict,
  time_train

To get help on a resampling method, use ?mlr_resamplings_xxx; for a measure, use ?mlr_measures_xxx. You can also browse the mlr3 reference online.

Note that some measures, for example AUC, require the prediction of probabilities.
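
For example, computing the AUC requires a learner with predict_type = "prob" (a minimal sketch):

learner_prob = lrn("classif.log_reg", predict_type = "prob")
rr_auc = resample(task, learner_prob, rsmp("cv", folds = 10))
rr_auc$aggregate(msr("classif.auc"))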

Performance Comparison and Benchmarks

We could compare Learners by evaluating resample() for each of them manually. However, benchmark() automatically performs resampling evaluations for multiple learners and tasks. benchmark_grid() creates fully crossed designs: Multiple Learners for multiple Tasks are compared w.r.t. multiple Resamplings.

learners = lrns(c("classif.log_reg", "classif.ranger"), predict_type = "prob")
bm_design = benchmark_grid(
  tasks = task,
  learners = learners,
  resamplings = rsmp("cv", folds = 10)
)
bmr = benchmark(bm_design)

Careful, large benchmarks may take a long time! This one should take less than a minute, however. In general, we want to use parallelization to speed things up on multi-core machines. For parallelization, mlr3 relies on the future package:

#future::plan("multiprocess") # uncomment for parallelization
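# Note: newer versions of the future package deprecate "multiprocess";
# future::plan("multisession") is the recommended replacement.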

In the benchmark we can compare different measures. Here, we look at the misclassification rate and the AUC:

measures = msrs(c("classif.ce", "classif.auc"))
performances = bmr$aggregate(measures)
performances[, c("learner_id", "classif.ce", "classif.auc")]
        learner_id classif.ce classif.auc
1: classif.log_reg      0.251   0.7775109
2:  classif.ranger      0.233   0.7957014

We see that the two models perform very similarly.

Deviating from hyperparameter defaults

The previously shown techniques build the backbone of an mlr3-based machine learning workflow. However, in most cases one would not proceed exactly in the way we did. While many R packages have carefully selected default settings, they will not perform optimally in every scenario. Typically, we can select the values of such hyperparameters. The (hyper)parameters of a Learner can be accessed and set via its ParamSet $param_set:

learner_rf$param_set
<ParamSet>
                              id    class lower upper nlevels        default    parents       value
 1:                        alpha ParamDbl  -Inf   Inf     Inf            0.5                       
 2:       always.split.variables ParamUty    NA    NA     Inf <NoDefault[3]>                       
 3:                class.weights ParamDbl  -Inf   Inf     Inf                                      
 4:                      holdout ParamLgl    NA    NA       2          FALSE                       
 5:                   importance ParamFct    NA    NA       4 <NoDefault[3]>            permutation
 6:                   keep.inbag ParamLgl    NA    NA       2          FALSE                       
 7:                    max.depth ParamInt  -Inf   Inf     Inf                                      
 8:                min.node.size ParamInt     1   Inf     Inf              1                       
 9:                     min.prop ParamDbl  -Inf   Inf     Inf            0.1                       
10:                      minprop ParamDbl  -Inf   Inf     Inf            0.1                       
11:                         mtry ParamInt     1   Inf     Inf <NoDefault[3]>                       
12:            num.random.splits ParamInt     1   Inf     Inf              1  splitrule            
13:                  num.threads ParamInt     1   Inf     Inf              1                      1
14:                    num.trees ParamInt     1   Inf     Inf            500                       
15:                    oob.error ParamLgl    NA    NA       2           TRUE                       
16:        regularization.factor ParamUty    NA    NA     Inf              1                       
17:      regularization.usedepth ParamLgl    NA    NA       2          FALSE                       
18:                      replace ParamLgl    NA    NA       2           TRUE                       
19:    respect.unordered.factors ParamFct    NA    NA       3         ignore                       
20:              sample.fraction ParamDbl     0     1     Inf <NoDefault[3]>                       
21:                  save.memory ParamLgl    NA    NA       2          FALSE                       
22: scale.permutation.importance ParamLgl    NA    NA       2          FALSE importance            
23:                    se.method ParamFct    NA    NA       2        infjack                       
24:                         seed ParamInt  -Inf   Inf     Inf                                      
25:         split.select.weights ParamDbl     0     1     Inf <NoDefault[3]>                       
26:                    splitrule ParamFct    NA    NA       2           gini                       
27:                      verbose ParamLgl    NA    NA       2           TRUE                       
28:                 write.forest ParamLgl    NA    NA       2           TRUE                       
                              id    class lower upper nlevels        default    parents       value
learner_rf$param_set$values = list(verbose = FALSE)

We can choose parameters for our learners in two distinct manners. If we have prior knowledge of how the learner should be (hyper-)parameterized, the way to go would be to manually set the parameters in the parameter set. In most cases, however, we would want to tune the learner so that it can search for “good” model configurations itself. For now, we only want to compare a few models.
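
For illustration, both routes look as follows (a minimal sketch; the name learner_rf2 and the parameter values are arbitrary):

learner_rf2 = lrn("classif.ranger", num.trees = 100)  # set at construction
learner_rf2$param_set$values$min.node.size = 5        # set after construction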

To get an idea of which parameters can be manipulated, we can investigate the parameters of the original package version or look into the parameter set of the learner:

## ?ranger::ranger
as.data.table(learner_rf$param_set)[, .(id, class, lower, upper)]
                              id    class lower upper
 1:                    num.trees ParamInt     1   Inf
 2:                         mtry ParamInt     1   Inf
 3:                   importance ParamFct    NA    NA
 4:                 write.forest ParamLgl    NA    NA
 5:                min.node.size ParamInt     1   Inf
 6:                      replace ParamLgl    NA    NA
 7:              sample.fraction ParamDbl     0     1
 8:                class.weights ParamDbl  -Inf   Inf
 9:                    splitrule ParamFct    NA    NA
10:            num.random.splits ParamInt     1   Inf
11:         split.select.weights ParamDbl     0     1
12:       always.split.variables ParamUty    NA    NA
13:    respect.unordered.factors ParamFct    NA    NA
14: scale.permutation.importance ParamLgl    NA    NA
15:                   keep.inbag ParamLgl    NA    NA
16:                      holdout ParamLgl    NA    NA
17:                  num.threads ParamInt     1   Inf
18:                  save.memory ParamLgl    NA    NA
19:                      verbose ParamLgl    NA    NA
20:                    oob.error ParamLgl    NA    NA
21:                    max.depth ParamInt  -Inf   Inf
22:                        alpha ParamDbl  -Inf   Inf
23:                     min.prop ParamDbl  -Inf   Inf
24:        regularization.factor ParamUty    NA    NA
25:      regularization.usedepth ParamLgl    NA    NA
26:                         seed ParamInt  -Inf   Inf
27:                      minprop ParamDbl  -Inf   Inf
28:                    se.method ParamFct    NA    NA
                              id    class lower upper

For the random forest two meaningful parameters which steer model complexity are num.trees and mtry. num.trees defaults to 500 and mtry to floor(sqrt(ncol(data) - 1)), in our case 4.
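
As a quick sanity check of the mtry default (note that task$ncol counts the target column as well):

floor(sqrt(task$ncol - 1))  # floor(sqrt(20)) = 4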

In the following we aim to train three different learners:

  1. The default random forest.
  2. A random forest with low num.trees and low mtry.
  3. A random forest with high num.trees and high mtry.

We will benchmark their performance on the German credit dataset. For this we construct the three learners and set the parameters accordingly:

rf_med = lrn("classif.ranger", id = "med", predict_type = "prob")

rf_low = lrn("classif.ranger", id = "low", predict_type = "prob",
  num.trees = 5, mtry = 2)

rf_high = lrn("classif.ranger", id = "high", predict_type = "prob",
  num.trees = 1000, mtry = 11)

Once the learners are defined, we can benchmark them:

learners = list(rf_low, rf_med, rf_high)
bm_design = benchmark_grid(
  tasks = task,
  learners = learners,
  resamplings = rsmp("cv", folds = 10)
)
bmr = benchmark(bm_design)
print(bmr)
<BenchmarkResult> of 30 rows with 3 resampling runs
 nr      task_id learner_id resampling_id iters warnings errors
  1 GermanCredit        low            cv    10        0      0
  2 GermanCredit        med            cv    10        0      0
  3 GermanCredit       high            cv    10        0      0

We compare misclassification rate and AUC again:

measures = msrs(c("classif.ce", "classif.auc"))
performances = bmr$aggregate(measures)
performances[, .(learner_id, classif.ce, classif.auc)]
   learner_id classif.ce classif.auc
1:        low      0.279   0.7140807
2:        med      0.234   0.7984136
3:       high      0.225   0.8039887
autoplot(bmr)

The “low” setting seems to underfit a bit, while the “high” setting is comparable to the default setting “med”.

Outlook

This tutorial was a detailed introduction to machine learning workflows within mlr3. Having followed it, you should be able to run your first models yourself. Beyond that, we took a first look at performance evaluation and benchmarking, and showed how to customize learners.

The next parts of the tutorial will go into more depth on additional mlr3 topics.

Appendix

Tips

Useful dictionaries and object introspection:

mlr_tasks
<DictionaryTask> with 11 stored values
Keys: boston_housing, breast_cancer, german_credit, iris, mtcars, penguins, pima, sonar, spam, wine, zoo
mlr_learners
<DictionaryLearner> with 29 stored values
Keys: classif.cv_glmnet, classif.debug, classif.featureless, classif.glmnet, classif.kknn, classif.lda,
  classif.log_reg, classif.multinom, classif.naive_bayes, classif.nnet, classif.qda, classif.ranger,
  classif.rpart, classif.svm, classif.xgboost, regr.cv_glmnet, regr.featureless, regr.glmnet, regr.kknn,
  regr.km, regr.lm, regr.ranger, regr.rpart, regr.svm, regr.xgboost, surv.cv_glmnet, surv.glmnet,
  surv.ranger, surv.xgboost
mlr_resamplings
<DictionaryResampling> with 8 stored values
Keys: bootstrap, custom, cv, holdout, insample, loo, repeated_cv, subsampling
mlr_measures
<DictionaryMeasure> with 54 stored values
Keys: classif.acc, classif.auc, classif.bacc, classif.bbrier, classif.ce, classif.costs, classif.dor,
  classif.fbeta, classif.fdr, classif.fn, classif.fnr, classif.fomr, classif.fp, classif.fpr,
  classif.logloss, classif.mbrier, classif.mcc, classif.npv, classif.ppv, classif.prauc, classif.precision,
  classif.recall, classif.sensitivity, classif.specificity, classif.tn, classif.tnr, classif.tp,
  classif.tpr, debug, oob_error, regr.bias, regr.ktau, regr.mae, regr.mape, regr.maxae, regr.medae,
  regr.medse, regr.mse, regr.msle, regr.pbias, regr.rae, regr.rmse, regr.rmsle, regr.rrse, regr.rse,
  regr.rsq, regr.sae, regr.smape, regr.srho, regr.sse, selected_features, time_both, time_predict,
  time_train
names(pred_rf)
 [1] ".__enclos_env__" "confusion"       "prob"            "response"        "missing"         "truth"          
 [7] "row_ids"         "man"             "predict_types"   "task_properties" "task_type"       "data"           
[13] "set_threshold"   "initialize"      "clone"           "score"           "help"            "print"          
[19] "format"         
class(pred_rf)
[1] "PredictionClassif" "Prediction"        "R6"               

Citation

For attribution, please cite this work as

Binder, et al. (2020, March 11). mlr3gallery: mlr3 Basics - German Credit. Retrieved from https://mlr3gallery.mlr-org.com/posts/2020-03-11-basics-german-credit/

BibTeX citation

@misc{binder2020mlr3,
  author = {Binder, Martin and Pfisterer, Florian and Lang, Michel},
  title = {mlr3gallery: mlr3 Basics - German Credit},
  url = {https://mlr3gallery.mlr-org.com/posts/2020-03-11-basics-german-credit/},
  year = {2020}
}