# Basic Recipes

This document demonstrates some basic uses of recipes. First, some definitions are required:

• variables are the original (raw) data columns in a data frame or tibble. For example, in a traditional formula Y ~ A + B + A:B, the variables are A, B, and Y.
• roles define how variables will be used in the model. Examples are: predictor (independent variables), response, and case weight. This is meant to be open-ended and extensible.
• terms are columns in a design matrix, such as A, B, and A:B. These can also be derived entities that are grouped, such as a set of principal components or a set of columns that define a basis function for a variable. These are synonymous with features in machine learning. Variables that have predictor roles automatically become main effect terms.
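
To make the distinction concrete, here is a small base R sketch (with made-up data) showing that a formula containing three variables yields a design matrix whose columns are the terms:

# Three variables appear in the formula...
all.vars(Y ~ A + B + A:B)
#> [1] "Y" "A" "B"

# ...but the design matrix has one column per term (plus an intercept):
df <- data.frame(Y = rnorm(4), A = rnorm(4), B = rnorm(4))
colnames(model.matrix(Y ~ A + B + A:B, data = df))
#> [1] "(Intercept)" "A"           "B"           "A:B"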

## An Example

The modeldata package contains a data set used to predict whether a person will pay back a bank loan. It has 13 predictor columns and a factor variable, Status (the outcome). We will first split the data into training and test sets:

library(recipes)
library(rsample)
library(modeldata)

data("credit_data")

set.seed(55)
train_test_split <- initial_split(credit_data)

credit_train <- training(train_test_split)
credit_test <- testing(train_test_split)
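
By default, initial_split allocates three quarters of the rows to the training set:

nrow(credit_train)
#> [1] 3341
nrow(credit_test)
#> [1] 1113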

Note that there are some missing values in these data:

vapply(credit_train, function(x) mean(!is.na(x)), numeric(1))
#>    Status Seniority      Home      Time       Age   Marital   Records       Job
#>     1.000     1.000     0.999     1.000     1.000     1.000     1.000     1.000
#>  Expenses    Income    Assets      Debt    Amount     Price
#>     1.000     0.919     0.988     0.996     1.000     1.000

Rather than remove these, their values will be imputed.

The idea is that the preprocessing operations will all be estimated using the training set and then applied to both the training and test sets.

## An Initial Recipe

First, we will create a recipe object from the original data and then specify the processing steps.

Recipes can be created manually by sequentially adding roles to variables in a data set.

If the analysis only requires outcomes and predictors, the easiest way to create the initial recipe is to use the standard formula method:

rec_obj <- recipe(Status ~ ., data = credit_train)
rec_obj
#> Data Recipe
#>
#> Inputs:
#>
#>       role #variables
#>    outcome          1
#>  predictor         13

The data passed to the data argument need not be the training set; it is only used to catalog the names of the variables and their types (e.g., numeric).
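
Because only the names and types are cataloged, a recipe could even be defined from just the first few rows of data (a small sketch; rec_obj_small is a hypothetical object name):

rec_obj_small <- recipe(Status ~ ., data = head(credit_train))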

(Note that the formula method is used here to declare the variables and their roles, and nothing else. If you use inline functions (e.g., log), it will complain. These types of operations can be added later.)
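
For example, rather than putting log(Amount) in the formula, the transformation can be added afterwards with step_log (a minimal sketch):

rec_log <- recipe(Status ~ ., data = credit_train) %>%
  step_log(Amount)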

## Preprocessing Steps

From here, preprocessing steps for some step X can be added sequentially in one of two ways:

rec_obj <- step_{X}(rec_obj, arguments)    ## or
rec_obj <- rec_obj %>% step_{X}(arguments)

step_dummy and the other functions will always return updated recipes.

One other important facet of the code is the method for specifying which variables should be used in different steps. The manual page ?selections has more details, but dplyr-like selector functions can be used:

• use basic variable names (e.g. x1, x2),
• dplyr functions for selecting variables: contains, ends_with, everything, matches, num_range, and starts_with,
• functions that subset on the role of the variables that have been specified so far: all_outcomes, all_predictors, has_role, or
• similar functions for the type of data: all_nominal, all_numeric, and has_type.

Note that the methods listed above are the only ones that can be used to select variables inside the steps. Also, minus signs can be used to deselect variables.
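
For instance, selectors can be combined and negated. This hypothetical step would center every numeric variable except Price:

rec_obj %>%
  step_center(all_numeric(), -Price)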

For our data, we can add an operation to impute the predictors. There are many ways to do this, and recipes includes several steps for this purpose:

grep("impute\$", ls("package:recipes"), value = TRUE)
#>  [1] "step_bagimpute"          "step_knnimpute"
#>  [3] "step_lowerimpute"        "step_meanimpute"
#>  [5] "step_medianimpute"       "step_modeimpute"
#>  [7] "step_rollimpute"         "tunable.step_bagimpute"
#>  [9] "tunable.step_knnimpute"  "tunable.step_meanimpute"
#> [11] "tunable.step_rollimpute"

Here, K-nearest neighbor imputation will be used. This works for both numeric and non-numeric predictors, and the number of neighbors defaults to five. The step is applied to all of the predictors:

imputed <- rec_obj %>%
  step_knnimpute(all_predictors())
imputed
#> Data Recipe
#>
#> Inputs:
#>
#>       role #variables
#>    outcome          1
#>  predictor         13
#>
#> Operations:
#>
#> K-nearest neighbor imputation for all_predictors()

It is important to realize that the specific variables have not been declared yet (as shown when the recipe is printed above). In some preprocessing steps, variables will be added to or removed from the current list of possible variables.
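
The summary method shows the variables (with their types and roles) that a recipe currently knows about; once a recipe has been prepared, the same call reflects any columns that steps have added or removed:

# A tibble of the variable names, types, and roles known so far:
summary(imputed)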

Since some predictors are categorical in nature (i.e., nominal), it would make sense to convert these factor predictors into numeric dummy variables (aka indicator variables) using step_dummy. To do this, the step selects all predictors and then removes those that are numeric:

ind_vars <- imputed %>%
  step_dummy(all_predictors(), -all_numeric())
ind_vars
#> Data Recipe
#>
#> Inputs:
#>
#>       role #variables
#>    outcome          1
#>  predictor         13
#>
#> Operations:
#>
#> K-nearest neighbor imputation for all_predictors()
#> Dummy variables from all_predictors(), -all_numeric()

At this point in the recipe, all of the predictors should be encoded as numeric, so we can add steps to center and scale them:

standardized <- ind_vars %>%
  step_center(all_predictors()) %>%
  step_scale(all_predictors())
standardized
#> Data Recipe
#>
#> Inputs:
#>
#>       role #variables
#>    outcome          1
#>  predictor         13
#>
#> Operations:
#>
#> K-nearest neighbor imputation for all_predictors()
#> Dummy variables from all_predictors(), -all_numeric()
#> Centering for all_predictors()
#> Scaling for all_predictors()

If these are the only preprocessing steps for the predictors, we can now estimate the means and standard deviations from the training set. The prep function is used with a recipe and a data set:

trained_rec <- prep(standardized, training = credit_train)
trained_rec
#> Data Recipe
#>
#> Inputs:
#>
#>       role #variables
#>    outcome          1
#>  predictor         13
#>
#> Training data contained 3341 data points and 303 incomplete rows.
#>
#> Operations:
#>
#> K-nearest neighbor imputation for Home, Time, Age, Marital, Records, ... [trained]
#> Dummy variables from Home, Marital, Records, Job [trained]
#> Centering for Seniority, Time, Age, Expenses, Income, ... [trained]
#> Scaling for Seniority, Time, Age, Expenses, Income, ... [trained]

Note that the real variables are listed (e.g. Home etc.) instead of the selectors (all_predictors()).
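
The tidy method is another way to inspect a trained recipe: calling it on the recipe lists the steps, and supplying a step number returns the statistics that step estimated (centering is the third step added here):

# List all steps in order:
tidy(trained_rec)

# The per-column means estimated by the centering step:
tidy(trained_rec, number = 3)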

Now that the statistics have been estimated, the preprocessing can be applied to the training and test set:

train_data <- bake(trained_rec, new_data = credit_train)
test_data  <- bake(trained_rec, new_data = credit_test)

bake returns a tibble that, by default, includes all of the variables:

class(test_data)
#> [1] "tbl_df"     "tbl"        "data.frame"
test_data
#> # A tibble: 1,113 x 23
#>    Seniority   Time      Age Expenses   Income  Assets   Debt Amount  Price
#>        <dbl>  <dbl>    <dbl>    <dbl>    <dbl>   <dbl>  <dbl>  <dbl>  <dbl>
#>  1   0.257   -0.706  0.826      1.75   0.718   -0.199  -0.273  2.05   2.53
#>  2   2.61     0.921  0.643      0.987 -0.221    0.386  -0.273  1.21   0.577
#>  3   0.133   -2.33  -0.909     -1.05  -0.784   -0.449  -0.273 -1.76  -0.589
#>  4   0.00966  0.921 -0.635      0.987  0.705   -0.0316  1.86   0.995  0.330
#>  5   0.875   -1.52   1.37      -1.05   2.34     0.929  -0.273  0.995  1.24
#>  6   3.10    -1.52   2.84       0.479  0.718   -0.0316  1.43  -0.912 -0.165
#>  7   2.24     0.921  1.28      -0.539  0.00434 -0.157  -0.273  0.677 -0.153
#>  8  -0.979    0.108  0.00402   -1.05  -0.371   -0.265  -0.273  0.995  0.660
#>  9   1.99     0.921  0.461      0.987 -0.0707  -0.115  -0.273 -0.276 -0.467
#> 10  -0.979    0.921 -0.453     -0.539 -0.00817  0.136   2.28   0.465  0.192
#> # … with 1,103 more rows, and 14 more variables: Status <fct>, Home_X1 <dbl>,
#> #   Home_X2 <dbl>, Home_X3 <dbl>, Home_X4 <dbl>, Home_X5 <dbl>,
#> #   Marital_X1 <dbl>, Marital_X2 <dbl>, Marital_X3 <dbl>, Marital_X4 <dbl>,
#> #   Records_X1 <dbl>, Job_X1 <dbl>, Job_X2 <dbl>, Job_X3 <dbl>
vapply(test_data, function(x) mean(!is.na(x)), numeric(1))
#>  Seniority       Time        Age   Expenses     Income     Assets       Debt
#>          1          1          1          1          1          1          1
#>     Amount      Price     Status    Home_X1    Home_X2    Home_X3    Home_X4
#>          1          1          1          1          1          1          1
#>    Home_X5 Marital_X1 Marital_X2 Marital_X3 Marital_X4 Records_X1     Job_X1
#>          1          1          1          1          1          1          1
#>     Job_X2     Job_X3
#>          1          1

Selectors can also be used. For example, if only the predictors are needed, you can use bake(object, new_data, all_predictors()).
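
Applied to our test set, that looks like:

test_predictors <- bake(trained_rec, new_data = credit_test, all_predictors())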

There are a number of other steps included in the package:

#>  [1] "step_BoxCox"        "step_YeoJohnson"    "step_arrange"
#>  [4] "step_bagimpute"     "step_bin2factor"    "step_bs"
#>  [7] "step_center"        "step_classdist"     "step_corr"
#> [10] "step_count"         "step_cut"           "step_date"
#> [13] "step_depth"         "step_discretize"    "step_downsample"
#> [16] "step_dummy"         "step_factor2string" "step_filter"
#> [19] "step_geodist"       "step_holiday"       "step_hyperbolic"
#> [22] "step_ica"           "step_integer"       "step_interact"
#> [25] "step_intercept"     "step_inverse"       "step_invlogit"
#> [28] "step_isomap"        "step_knnimpute"     "step_kpca"
#> [31] "step_kpca_poly"     "step_kpca_rbf"      "step_lag"
#> [34] "step_lincomb"       "step_log"           "step_logit"
#> [37] "step_lowerimpute"   "step_meanimpute"    "step_medianimpute"
#> [40] "step_modeimpute"    "step_mutate"        "step_mutate_at"
#> [43] "step_naomit"        "step_nnmf"          "step_normalize"
#> [46] "step_novel"         "step_ns"            "step_num2factor"
#> [49] "step_nzv"           "step_ordinalscore"  "step_other"
#> [52] "step_pca"           "step_pls"           "step_poly"
#> [55] "step_profile"       "step_range"         "step_ratio"
#> [58] "step_regex"         "step_relevel"       "step_relu"
#> [61] "step_rename"        "step_rename_at"     "step_rm"
#> [64] "step_rollimpute"    "step_sample"        "step_scale"
#> [67] "step_shuffle"       "step_slice"         "step_spatialsign"
#> [70] "step_sqrt"          "step_string2factor" "step_unknown"
#> [73] "step_unorder"       "step_upsample"      "step_window"
#> [76] "step_zv"

## Checks

Another type of operation that can be added to a recipe is a check. Checks conduct some sort of data validation and, if no issue is found, return the data as-is; otherwise, an error is thrown.

For example, check_missing will fail if any of the variables selected for validation have missing values. This check is done when the recipe is prepared as well as when any data are baked. Checks are added in the same way as steps:

trained_rec <- trained_rec %>%
  check_missing(contains("Marital"))

Currently, recipes includes:

#> [1] "check_class"      "check_cols"       "check_missing"    "check_name"
#> [5] "check_new_values" "check_range"      "check_type"