Feature Selection on the Titanic Data Set

mlr3fselect optimization feature selection nested resampling titanic data set classification

We give a short introduction to mlr3fselect.

Marc Becker
01-08-2021

Introduction

In this tutorial, we introduce the mlr3fselect package by comparing feature selection methods on the Titanic disaster data set. The objective of feature selection is to enhance the interpretability of models, speed up the learning process, and increase predictive performance.

Titanic data set

The Titanic data set contains data for 887 Titanic passengers, including whether they survived when the Titanic sank. Our goal will be to predict the survival of the Titanic passengers.

After loading the data set from the mlr3data package, we impute the missing age values with the median age of the passengers, set missing embarked values to "S" and remove character features. We could use feature engineering to create new features from the character features; however, we want to focus on feature selection in this tutorial.

In addition to the survived column, the reduced data set contains the following attributes for each passenger:

Feature   Description
age       Age
sex       Sex
sib_sp    Number of siblings / spouses aboard
parch     Number of parents / children aboard
fare      Amount paid for the ticket
pclass    Passenger class
embarked  Port of embarkation
library(mlr3data)

data("titanic", package = "mlr3data")

# Impute missing age values with the median age
titanic$age[is.na(titanic$age)] = median(titanic$age, na.rm = TRUE)

# Set missing embarked values to "S"
titanic$embarked[is.na(titanic$embarked)] = "S"

# Remove character features
titanic$ticket = NULL
titanic$name = NULL
titanic$cabin = NULL

# Keep only passengers with a known outcome
titanic = titanic[!is.na(titanic$survived), ]

We construct a binary classification task.

library(mlr3)

task = TaskClassif$new(id = "titanic", backend = titanic, target = "survived", positive = "yes")

Model

We use the logistic regression learner provided by the mlr3learners package.

library(mlr3learners)

learner = lrn("classif.log_reg")

To evaluate the predictive performance, we choose a 3-fold cross-validation and the classification error as the measure.

resampling = rsmp("cv", folds = 3)
measure = msr("classif.ce")

resampling$instantiate(task)

Classes

The FSelectInstanceSingleCrit class specifies a general feature selection scenario. It includes the ObjectiveFSelect object that encodes the black box objective function which is optimized by a feature selection algorithm. The evaluated feature sets are stored in an ArchiveFSelect object. The archive provides a method for querying the best performing feature set.

The Terminator classes determine when to stop the feature selection. In this example we choose a terminator that stops the feature selection after 10 seconds. The sugar functions trm() and trms() can be used to retrieve terminators from the mlr_terminators dictionary.
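For orientation, the available terminators can be listed from the dictionary. A minimal sketch (assuming, as in the code below, that loading mlr3fselect makes the bbotk terminators and sugar functions available):

```r
library(mlr3fselect)

# List the keys of all terminators registered in the dictionary
as.data.table(mlr_terminators)$key

# trm() retrieves a single terminator; trms() retrieves several at once
trms(c("evals", "run_time"))
```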

library(mlr3fselect)

terminator = trm("run_time", secs = 10)
FSelectInstanceSingleCrit$new(
  task = task,
  learner = learner,
  resampling = resampling,
  measure = measure,
  terminator = terminator)
<FSelectInstanceSingleCrit>
* State:  Not optimized
* Objective: <ObjectiveFSelect:classif.log_reg_on_titanic>
* Search Space:
<ParamSet>
         id    class lower upper nlevels        default value
1:      age ParamLgl    NA    NA       2 <NoDefault[3]>      
2: embarked ParamLgl    NA    NA       2 <NoDefault[3]>      
3:     fare ParamLgl    NA    NA       2 <NoDefault[3]>      
4:    parch ParamLgl    NA    NA       2 <NoDefault[3]>      
5:   pclass ParamLgl    NA    NA       2 <NoDefault[3]>      
6:      sex ParamLgl    NA    NA       2 <NoDefault[3]>      
7:   sib_sp ParamLgl    NA    NA       2 <NoDefault[3]>      
* Terminator: <TerminatorRunTime>
* Terminated: FALSE
* Archive:
<ArchiveFSelect>
Null data.table (0 rows and 0 cols)

The FSelector subclasses describe the feature selection strategy. The sugar function fs() can be used to retrieve feature selection algorithms from the mlr_fselectors dictionary.

mlr_fselectors
<DictionaryFSelect> with 6 stored values
Keys: design_points, exhaustive_search, genetic_search,
  random_search, rfe, sequential

Random search randomly draws feature sets and evaluates them in batches. We retrieve the FSelectorRandomSearch class with the fs() sugar function and choose TerminatorEvals. We set the n_evals parameter to 10, which means that 10 feature sets are evaluated.

terminator = trm("evals", n_evals = 10)
instance = FSelectInstanceSingleCrit$new(
  task = task,
  learner = learner,
  resampling = resampling,
  measure = measure,
  terminator = terminator)
fselector = fs("random_search", batch_size = 5)

The feature selection is started by passing the FSelectInstanceSingleCrit object to the $optimize() method of FSelectorRandomSearch, which generates the feature sets. These feature sets are internally passed to the $eval_batch() method of FSelectInstanceSingleCrit, which evaluates them with the objective function and stores the results in the archive. This general interaction between the objects of mlr3fselect stays the same for the different feature selection methods. However, how new feature sets are generated differs depending on the chosen FSelector subclass.

fselector$optimize(instance)
    age embarked fare parch pclass  sex sib_sp
1: TRUE    FALSE TRUE  TRUE   TRUE TRUE   TRUE
                           features classif.ce
1: age,fare,parch,pclass,sex,sib_sp  0.2020202

The ArchiveFSelect stores a data.table::data.table() that contains the evaluated feature sets and the corresponding estimated predictive performances.

as.data.table(instance$archive)[, 1:8]
      age embarked  fare parch pclass   sex sib_sp classif.ce
 1:  TRUE     TRUE  TRUE  TRUE   TRUE  TRUE   TRUE  0.2031425
 2:  TRUE    FALSE FALSE FALSE  FALSE FALSE   TRUE  0.3838384
 3: FALSE    FALSE FALSE  TRUE  FALSE FALSE   TRUE  0.3804714
 4: FALSE    FALSE  TRUE FALSE  FALSE FALSE  FALSE  0.3288440
 5: FALSE    FALSE  TRUE FALSE  FALSE  TRUE  FALSE  0.2188552
 6: FALSE    FALSE FALSE FALSE   TRUE FALSE  FALSE  0.3209877
 7:  TRUE    FALSE FALSE FALSE  FALSE FALSE   TRUE  0.3838384
 8:  TRUE    FALSE  TRUE  TRUE   TRUE  TRUE   TRUE  0.2020202
 9:  TRUE     TRUE  TRUE  TRUE   TRUE  TRUE   TRUE  0.2031425
10:  TRUE    FALSE  TRUE  TRUE  FALSE FALSE  FALSE  0.3389450
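The query method mentioned earlier can be used on the archive directly. A minimal sketch (assuming $best(), inherited from the bbotk archive, is the query method):

```r
# Query the archive for the best performing feature set evaluated so far
instance$archive$best()
```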

The associated resampling iterations can be accessed in the BenchmarkResult by calling

instance$archive$benchmark_result
<BenchmarkResult> of 30 rows with 10 resampling runs
 nr task_id             learner_id resampling_id iters warnings
  1 titanic select.classif.log_reg            cv     3        0
  2 titanic select.classif.log_reg            cv     3        0
  3 titanic select.classif.log_reg            cv     3        0
  4 titanic select.classif.log_reg            cv     3        0
  5 titanic select.classif.log_reg            cv     3        0
  6 titanic select.classif.log_reg            cv     3        0
  7 titanic select.classif.log_reg            cv     3        0
  8 titanic select.classif.log_reg            cv     3        0
  9 titanic select.classif.log_reg            cv     3        0
 10 titanic select.classif.log_reg            cv     3        0
 errors
      0
      0
      0
      0
      0
      0
      0
      0
      0
      0

We retrieve the best performing feature set with

instance$result
    age embarked fare parch pclass  sex sib_sp
1: TRUE    FALSE TRUE  TRUE   TRUE TRUE   TRUE
                           features classif.ce
1: age,fare,parch,pclass,sex,sib_sp  0.2020202

Sequential forward selection

We try sequential forward selection. We choose TerminatorStagnation, which stops the feature selection once the predictive performance no longer improves.

terminator = trm("stagnation", iters = 5)
instance = FSelectInstanceSingleCrit$new(
  task = task,
  learner = learner,
  resampling = resampling,
  measure = measure,
  terminator = terminator)

fselector = fs("sequential")
fselector$optimize(instance)
     age embarked  fare parch pclass  sex sib_sp
1: FALSE    FALSE FALSE  TRUE   TRUE TRUE   TRUE
                  features classif.ce
1: parch,pclass,sex,sib_sp  0.1964085

The FSelectorSequential object has a special method for displaying the optimization path of the sequential feature selection.

fselector$optimization_path(instance)
   get  age embarked  fare parch pclass   sex sib_sp classif.ce
1:   1 TRUE    FALSE FALSE FALSE  FALSE FALSE  FALSE  0.3838384
2:   2 TRUE    FALSE FALSE FALSE  FALSE  TRUE  FALSE  0.2132435
3:   3 TRUE    FALSE FALSE FALSE  FALSE  TRUE   TRUE  0.2087542
4:   4 TRUE    FALSE FALSE FALSE   TRUE  TRUE   TRUE  0.2143659
5:   5 TRUE    FALSE FALSE  TRUE   TRUE  TRUE   TRUE  0.2065095
6:   6 TRUE    FALSE  TRUE  TRUE   TRUE  TRUE   TRUE  0.2020202
                                  uhash           timestamp batch_nr
1: b88369c9-922b-4b06-8c38-d29c715575cc 2021-06-22 04:56:20        1
2: e8beeff4-1624-481f-bd23-e7096a92744a 2021-06-22 04:56:22        2
3: 9c52f877-65a9-4b21-9593-a56b345f3961 2021-06-22 04:56:24        3
4: 4347ef17-d11b-45e2-b81e-066ccb3c2283 2021-06-22 04:56:25        4
5: 62f1305d-f472-4677-9be8-d86ccca7a558 2021-06-22 04:56:26        5
6: e20adb22-061a-44e6-ab4a-0f3062fc54d3 2021-06-22 04:56:27        6

Recursive feature elimination

Recursive feature elimination utilizes the $importance() method of learners. In each iteration, the feature(s) with the lowest importance scores are dropped. We choose the non-recursive algorithm (recursive = FALSE), which calculates the feature importance once on the complete feature set. The recursive version (recursive = TRUE) recomputes the feature importance on the reduced feature set in every iteration.

learner = lrn("classif.ranger", importance = "impurity")
terminator = trm("none")
instance = FSelectInstanceSingleCrit$new(
  task = task,
  learner = learner,
  resampling = resampling,
  measure = measure,
  terminator = terminator,
  store_models = TRUE)

fselector = fs("rfe", recursive = FALSE)
fselector$optimize(instance)
    age embarked fare parch pclass  sex sib_sp
1: TRUE     TRUE TRUE  TRUE   TRUE TRUE   TRUE
                                 features classif.ce
1: age,embarked,fare,parch,pclass,sex,...  0.1694725

We access the results.

as.data.table(instance$archive)[, 1:8]
     age embarked  fare parch pclass  sex sib_sp classif.ce
1:  TRUE     TRUE  TRUE  TRUE   TRUE TRUE   TRUE  0.1694725
2:  TRUE    FALSE  TRUE FALSE  FALSE TRUE  FALSE  0.2143659
3: FALSE    FALSE FALSE FALSE  FALSE TRUE  FALSE  0.2132435
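For comparison, the recursive variant only requires switching the recursive flag. A minimal sketch that sets up a fresh instance with the same components as above:

```r
instance = FSelectInstanceSingleCrit$new(
  task = task,
  learner = learner,
  resampling = resampling,
  measure = measure,
  terminator = trm("none"),
  store_models = TRUE)

# recursive = TRUE recomputes the importance on each reduced feature set
fselector = fs("rfe", recursive = TRUE)
fselector$optimize(instance)
```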

Nested resampling

It is a common mistake to report the predictive performance estimated on the resampling sets used during the feature selection as the performance that can be expected from the combined feature selection and model training. The repeated evaluation of the model might leak information about the test sets into the model and thus lead to over-fitting and over-optimistic performance results. Nested resampling uses an outer and an inner resampling to separate the feature selection from the performance estimation of the model. We can use the AutoFSelector class to run nested resampling. The AutoFSelector essentially combines a given Learner and a feature selection method into a Learner with internal automatic feature selection. The inner resampling loop that determines the best feature set is conducted internally each time the AutoFSelector Learner object is trained.

resampling_inner = rsmp("cv", folds = 5)
measure = msr("classif.ce")

at = AutoFSelector$new(
  learner = learner,
  resampling = resampling_inner,
  measure = measure,
  terminator = terminator,
  fselect = fs("sequential"),
  store_models = TRUE)

We put the AutoFSelector into a resample() call to get the outer resampling loop.

resampling_outer = rsmp("cv", folds = 3)

rr = resample(task, at, resampling_outer, store_models = TRUE)

The aggregated performance of all outer resampling iterations is the unbiased predictive performance we can expect from the random forest model with an optimized feature set found by sequential selection.

rr$aggregate()
classif.ce 
 0.1840629 
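After estimating the performance, one would typically fit a final model on the complete task. A minimal sketch with the AutoFSelector (the feature selection is then run once on all data; $fselect_result is assumed to hold the selected set):

```r
# Train the AutoFSelector on the full task to obtain a final model
at$train(task)

# Feature set selected for the final model
at$fselect_result
```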

We check whether the feature sets that were selected in the inner resampling are stable; the selected feature sets should not differ too much. We might observe unstable models in this example because the small data set and the low number of resampling iterations introduce too much randomness. Usually, we aim for the selection of similar feature sets across all outer training sets.

do.call(rbind, lapply(rr$learners, function(x) x$fselect_result))
    age embarked  fare parch pclass  sex sib_sp
1: TRUE     TRUE  TRUE FALSE   TRUE TRUE   TRUE
2: TRUE     TRUE FALSE FALSE   TRUE TRUE   TRUE
3: TRUE     TRUE FALSE  TRUE  FALSE TRUE   TRUE
                              features classif.ce
1: age,embarked,fare,pclass,sex,sib_sp  0.1599202
2:      age,embarked,pclass,sex,sib_sp  0.1497650
3:       age,embarked,parch,sex,sib_sp  0.1902293

Next, we compare the predictive performances estimated on the outer resampling to those from the inner resampling. Significantly worse performance on the outer resampling indicates that the models with the optimized feature sets overfit the data.

rr$score()
                task task_id             learner
1: <TaskClassif[46]> titanic <AutoFSelector[38]>
2: <TaskClassif[46]> titanic <AutoFSelector[38]>
3: <TaskClassif[46]> titanic <AutoFSelector[38]>
                 learner_id         resampling resampling_id
1: classif.ranger.fselector <ResamplingCV[19]>            cv
2: classif.ranger.fselector <ResamplingCV[19]>            cv
3: classif.ranger.fselector <ResamplingCV[19]>            cv
   iteration              prediction classif.ce
1:         1 <PredictionClassif[19]>  0.1649832
2:         2 <PredictionClassif[19]>  0.2289562
3:         3 <PredictionClassif[19]>  0.1582492

The archive of the AutoFSelector gives us all evaluated hyperparameter configurations (i.e. feature sets) with the associated predictive performances.

rr$learners[[1]]$archive$data[, 1:8]
      age embarked  fare parch pclass   sex sib_sp classif.ce
 1:  TRUE    FALSE FALSE FALSE  FALSE FALSE  FALSE  0.4090585
 2: FALSE     TRUE FALSE FALSE  FALSE FALSE  FALSE  0.3635949
 3: FALSE    FALSE  TRUE FALSE  FALSE FALSE  FALSE  0.3316906
 4: FALSE    FALSE FALSE  TRUE  FALSE FALSE  FALSE  0.3754024
 5: FALSE    FALSE FALSE FALSE   TRUE FALSE  FALSE  0.3114798
 6: FALSE    FALSE FALSE FALSE  FALSE  TRUE  FALSE  0.2254949
 7: FALSE    FALSE FALSE FALSE  FALSE FALSE   TRUE  0.3720553
 8:  TRUE    FALSE FALSE FALSE  FALSE  TRUE  FALSE  0.2170916
 9: FALSE     TRUE FALSE FALSE  FALSE  TRUE  FALSE  0.2254949
10: FALSE    FALSE  TRUE FALSE  FALSE  TRUE  FALSE  0.2254949
11: FALSE    FALSE FALSE  TRUE  FALSE  TRUE  FALSE  0.2238143
12: FALSE    FALSE FALSE FALSE   TRUE  TRUE  FALSE  0.2238428
13: FALSE    FALSE FALSE FALSE  FALSE  TRUE   TRUE  0.2221478
14:  TRUE     TRUE FALSE FALSE  FALSE  TRUE  FALSE  0.2221336
15:  TRUE    FALSE  TRUE FALSE  FALSE  TRUE  FALSE  0.2221336
16:  TRUE    FALSE FALSE  TRUE  FALSE  TRUE  FALSE  0.2137445
17:  TRUE    FALSE FALSE FALSE   TRUE  TRUE  FALSE  0.2053269
18:  TRUE    FALSE FALSE FALSE  FALSE  TRUE   TRUE  0.2036320
19:  TRUE     TRUE FALSE FALSE  FALSE  TRUE   TRUE  0.2087025
20:  TRUE    FALSE  TRUE FALSE  FALSE  TRUE   TRUE  0.1986469
21:  TRUE    FALSE FALSE  TRUE  FALSE  TRUE   TRUE  0.2019798
22:  TRUE    FALSE FALSE FALSE   TRUE  TRUE   TRUE  0.2036604
23:  TRUE     TRUE  TRUE FALSE  FALSE  TRUE   TRUE  0.1952713
24:  TRUE    FALSE  TRUE  TRUE  FALSE  TRUE   TRUE  0.1919100
25:  TRUE    FALSE  TRUE FALSE   TRUE  TRUE   TRUE  0.1817690
26:  TRUE     TRUE  TRUE FALSE   TRUE  TRUE   TRUE  0.1599202
27:  TRUE    FALSE  TRUE  TRUE   TRUE  TRUE   TRUE  0.1716565
28:  TRUE     TRUE  TRUE  TRUE   TRUE  TRUE   TRUE  0.1666287
      age embarked  fare parch pclass   sex sib_sp classif.ce

Citation

For attribution, please cite this work as

Becker (2021, Jan. 8). mlr3gallery: Feature Selection on the Titanic Data Set. Retrieved from https://mlr3gallery.mlr-org.com/posts/2020-09-14-mlr3fselect-basic/

BibTeX citation

@misc{becker2021feature,
  author = {Becker, Marc},
  title = {mlr3gallery: Feature Selection on the Titanic Data Set},
  url = {https://mlr3gallery.mlr-org.com/posts/2020-09-14-mlr3fselect-basic/},
  year = {2021}
}