Ludvig Renbo Olsen | Portfolio

Running cross_validate from cvms in parallel

The cvms package is useful for cross-validating a list of linear and logistic regression model formulas in R. To speed up the process, I’ve added the option to cross-validate the models in parallel. In this post, I will walk you through a simple example and introduce the combine_predictors() function, which generates model formulas by combining a list of fixed effects. We will be using the simple participant.scores dataset from cvms.

First, we will install the newest versions of cvms and groupdata2 from GitHub. You will also need the doParallel package.

# Install packages
# The development versions are installed from GitHub; devtools is one option
# install.packages("devtools")
devtools::install_github("LudvigOlsen/groupdata2")
devtools::install_github("LudvigOlsen/cvms")
install.packages("doParallel")

Then, we attach the packages and set the random seed to 1.

# Attach packages
library(cvms) # cross_validate, combine_predictors
library(groupdata2) # fold
library(doParallel) # registerDoParallel

# Set seed for reproducibility
# Note that R versions < 3.6.0 may give different results
set.seed(1)

Now, we will create the folds for cross-validation. This simply adds a factor called .folds to the dataset, containing fold identifiers (e.g. 1,1,1,2,2,3,3,…). We will also ensure that we have a similar ratio of the two diagnoses in each fold, and that all rows pertaining to a participant are put in the same fold.

# Create folds in the dataset
data <- fold(participant.scores, k = 4,
             cat_col = "diagnosis",
             id_col = "participant")

We will use the combine_predictors() function to generate our model formulas. We supply the list of fixed effects (we will use age and score) and it combines them with and without interactions. Note that with more than 6 fixed effects, it becomes very slow due to the number of possible combinations. To deal with this, it has options to limit the number of fixed effects per formula, along with the maximum size of the included interactions. We will not use those here though.

# Generate model formulas with combine_predictors()
models <- combine_predictors(dependent = "diagnosis",
                             fixed_effects = c("age", "score"))

### [1] "diagnosis ~ age" "diagnosis ~ score"
### [3] "diagnosis ~ age * score" "diagnosis ~ age + score"
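If we did have a larger set of fixed effects, the limiting options could look something like the sketch below. The argument names here (max_fixed_effects, max_interaction_size) are taken from the cvms documentation at the time of writing, so check ?combine_predictors for your installed version.

```r
# Sketch: limit the generated formulas for larger sets of fixed effects
models_limited <- combine_predictors(
  dependent = "diagnosis",
  fixed_effects = c("age", "score", "session"),
  max_fixed_effects = 2,    # at most 2 fixed effects per formula
  max_interaction_size = 2  # interactions may contain at most 2 effects
)
```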

We want to test if running cross_validate() in parallel is faster than running it sequentially. This would be hard to tell with only 4 simple models, so we repeat the model formulas 100 times each.

# Repeat formulas 100 times
models_repeated <- rep(models, each = 100)

Now we can cross-validate with and without parallelization. We will start without it.

# Cross-validate the model formulas without parallelization
system.time({cv_1 <- cross_validate(data,
                                    models = models_repeated,
                                    family = "binomial")})

### user system elapsed
### 26.290 0.194 26.595

This took 26.595 seconds to run.

For the parallelization, we will use the doParallel package. There are other options out there though.

First, we register the number of CPU cores to use. I will use 4 cores.

# Register CPU cores
# parallel::detectCores() can tell you how many cores are available
registerDoParallel(4)

Then, we simply set parallel to TRUE in cross_validate().

# Cross-validate the model formulas with parallelization
system.time({cv_2 <- cross_validate(data,
                                    models = models_repeated,
                                    family = "binomial",
                                    parallel = TRUE)})

### user system elapsed
### 39.274 1.845 10.955

This time it took only 10.955 seconds, roughly a 2.4x speedup (26.595 / 10.955) on 4 cores!

As these formulas are very simple, and the dataset is very small, it's difficult to estimate how much time the parallelization will save in the real world. If we were cross-validating many larger models on a big dataset, though, it could save a meaningful amount of time.

In this post, you have learned to run cross_validate() in parallel. This functionality can also be found in validate(), and I have also added it to the new baseline() function, which I will cover in a future post. It creates baseline evaluations, so we have something to compare our awesome models to. Pretty neat!
You have also learned to generate model formulas with combine_predictors().

Repeated cross-validation in cvms and groupdata2

I have spent the last couple of days adding functionality for performing repeated cross-validation to cvms and groupdata2. In this quick post I will show an example.

(Please note: At the moment, you need to use the GitHub version of groupdata2. I hope to update it on CRAN this month.)

In cross-validation, we split our training set into a number (often denoted "k") of groups called folds. We repeatedly train our machine learning model on k-1 folds and test it on the last fold, such that each fold becomes the test set once. Then we average the results and celebrate with food and music.
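As a minimal sketch of that procedure in base R (a simplified illustration, not the cvms implementation; the toy data and the choice of accuracy as metric are made up for the example):

```r
# Simplified k-fold cross-validation sketch (not the cvms implementation)
set.seed(1)
toy <- data.frame(
  y = rbinom(40, 1, 0.5),        # made-up binary outcome
  x = rnorm(40),                 # made-up predictor
  .folds = rep(1:4, each = 10)   # fold identifiers, as fold() would add
)

accuracies <- sapply(1:4, function(k) {
  train <- toy[toy$.folds != k, ]  # train on k-1 folds
  test <- toy[toy$.folds == k, ]   # test on the remaining fold
  fit <- glm(y ~ x, data = train, family = "binomial")
  preds <- predict(fit, newdata = test, type = "response") > 0.5
  mean(preds == (test$y == 1))     # accuracy on the test fold
})

mean(accuracies)  # average the fold results
```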

The benefits of using groupdata2 to create the folds are 1) that it allows us to balance the ratios of our output classes (or simply a categorical column, if we are working with linear regression instead of classification), and 2) that it allows us to keep all observations with a specific ID (e.g. participant/user ID) in the same fold to avoid leakage between the folds.

The benefit of cvms is that it trains all the models and outputs a tibble (data frame) with results, predictions, model coefficients, and other sweet stuff, which is easy to add to a report or do further analyses on. It even allows us to cross-validate multiple model formulas at once to quickly compare them and select the best model.

Repeated Cross-validation

In repeated cross-validation, we simply repeat this process a number of times, training the model on more combinations of our training set observations. The more combinations, the less a single bad split of the data impacts our evaluation of the model.

For each repetition, we evaluate our model as we would have in regular cross-validation. Then we average the results from the repetitions and go back to food and music.
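Conceptually, the final result is just the mean of the per-repetition evaluations. The numbers below are purely illustrative, not from the example data:

```r
# Illustrative only: averaging a metric over 3 repetitions
balanced_accuracies <- c(rep_1 = 0.78, rep_2 = 0.81, rep_3 = 0.76)
mean(balanced_accuracies)

### [1] 0.7833333
```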


As stated, the role of groupdata2 is to create the folds. Normally it creates one column in the dataset called ".folds", which contains a fold identifier for each observation (e.g. 1,1,2,2,3,3,1,1,3,3,2,2). In repeated cross-validation it simply creates multiple such fold columns (".folds_1", ".folds_2", etc.). It also makes sure they are unique, so we actually train on different subsets.

# Install groupdata2 and cvms from GitHub
# install.packages("devtools")
devtools::install_github("LudvigOlsen/groupdata2")
devtools::install_github("LudvigOlsen/cvms")

# Attach packages
library(cvms) # cross_validate()
library(groupdata2) # fold()
library(knitr) # kable()
library(dplyr) # %>%

# Set seed for reproducibility
set.seed(1)

# Load data
data <- participant.scores

# Fold data
# Create 3 fold columns
# cat_col is the categorical column to balance between folds
# id_col is the column with IDs. Observations with the same ID will be put in the same fold.
# num_fold_cols determines the number of fold columns, and thereby the number of repetitions.
data <- fold(data, k = 4,
             cat_col = 'diagnosis',
             id_col = 'participant',
             num_fold_cols = 3)

# Show the first 10 rows of the data
data %>% head(10) %>% kable()

Data Subset with 3 Fold Columns


In the cross_validate() function, we specify our model formula for a logistic regression that classifies diagnosis. cvms currently supports linear regression and logistic regression, including mixed effects modelling. In the fold_cols argument (previously called folds_col), we specify the names of the fold columns.

CV <- cross_validate(data, "diagnosis~score",
                     fold_cols = c('.folds_1', '.folds_2', '.folds_3'),
                     family = "binomial")

# Show results
CV %>% kable()

Output tibble

Due to the number of metrics and useful information, it helps to break up the output into parts:

CV %>% select(1:7) %>% kable()

Evaluation metrics (subset 1)

CV %>% select(8:14) %>% kable()

Evaluation metrics (subset 2)

CV$Predictions[[1]] %>% head() %>% kable()

Nested predictions (subset)

CV$`Confusion Matrix`[[1]] %>% head() %>% kable()

Nested confusion matrices (subset)

CV$Coefficients[[1]] %>% head() %>% kable()

Nested model coefficients (subset)

CV$Results[[1]] %>% select(1:8) %>% kable()

Nested results per fold column (subset)


We could have trained multiple models at once by simply adding more model formulas. That would add rows to the output, making it easy to compare the models.
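For example, a sketch reusing the folded data from above (the Fixed and `Balanced Accuracy` column names are as in the cvms output at the time of writing, so check your version's output):

```r
# Cross-validate two model formulas at once
CV_multi <- cross_validate(data,
                           models = c("diagnosis ~ score",
                                      "diagnosis ~ score + age"),
                           fold_cols = c('.folds_1', '.folds_2', '.folds_3'),
                           family = "binomial")

# One row per formula; compare e.g. the Balanced Accuracy column
CV_multi %>% select(Fixed, `Balanced Accuracy`) %>% kable()
```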

The linear regression version has different evaluation metrics. These are listed in the help page at ?cross_validate.


cvms and groupdata2 now have the functionality for performing repeated cross-validation. We have briefly talked about this technique and gone through a short example. Check out cvms for more 🙂