Tests

Learning objectives:

  • Understand how to write and run tests.
  • Explore examples of the types of test you can run for your simulation model.

Relevant reproducibility guidelines:

  • NHS Levels of RAP (πŸ₯ˆ): Pipeline includes a testing framework (unit tests, back tests).

Pre-reading:

This page will run tests on the model from the parallel processing page (which is likewise used on the scenario and sensitivity analysis page).

Entity generation β†’ Entity processing β†’ Initialisation bias β†’ Performance measures β†’ Replications β†’ Parallel processing β†’ Tests


Testing is the process of evaluating a model to ensure it works as expected, gives reliable results, and can handle different conditions. By systematically checking for errors, inconsistencies, or unexpected behaviours, testing helps improve the quality of a model, catch problems early and prevent future issues. Tests can also be used to check that your results are consistent from one run of the code to the next (i.e., that they are reproducible).

When you create a model, you will naturally carry out some testing already, in the form of simple manual checks where you observe outputs and make sure they look right. These checks can be formalised and automated so that you can run them after any change and catch any issues that arise.

Introduction to writing and running tests

Writing a basic test

A popular framework for testing in Python is pytest.

Each test in pytest is a function that contains an assertion statement to check a condition (e.g., number > 0). If the condition fails, pytest reports the test as failed and displays the associated error message (e.g., “The number should be positive”).

Tests are typically stored in a folder called tests/, and filenames start with the prefix test_. This naming convention allows pytest to automatically discover and run all the tests in the folder.

Here’s an example of a simple test using pytest:

import pytest


def test_positive():
    """
    Confirm that the number is positive.
    """
    number = 5
    assert number > 0, "The number should be positive"

A popular framework for testing in R is testthat.

Each test uses the test_that() function and is structured around expectations that check specific conditions (e.g., expect_true(), expect_false(), expect_equal(), expect_error() - see the package index for more).

Tests are typically stored in a folder called tests/testthat, and filenames start with the prefix test_. This naming convention allows testthat to automatically discover and run all tests in the folder.

Here’s an example of a simple test using testthat:

library(testthat)

test_that("Confirm that the number is positive", {
  number <- 5L
  expect_gt(number, 0L)
})
Test passed πŸŽ‰

Running tests

Tests are typically run from the terminal. Commands include:

  • pytest - runs all tests.
  • pytest tests/filename.py - runs tests from a specific file.
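
You can also run a single test by appending its name (e.g., pytest tests/filename.py::test_positive). If you prefer to launch tests from a script or notebook rather than the terminal, pytest can also be invoked programmatically. A minimal sketch (the path is illustrative):

import pytest

# Equivalent to running "pytest tests/" in the terminal; returns an exit code
exit_code = pytest.main(["tests/"])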

When you run a test, you’ll see an output like this in the terminal:

Test output:
============================= test session starts ==============================
platform linux -- Python 3.11.9, pytest-8.4.1, pluggy-1.6.0
rootdir: /__w/des_rap_book/des_rap_book/pages/verification_validation
plugins: anyio-4.11.0, timeout-2.4.0
collected 1 item

tests_resources/test_example_simple.py .                                 [100%]

============================== 1 passed in 0.01s ===============================
<ExitCode.OK: 0>

Tests are typically run from the R console. Commands include:

  • testthat::test_dir() - runs all tests in current directory (and sub-directories).
  • testthat::test_file("tests/testthat/test-filename.R") - runs tests from a specific file.
  • devtools::test() - runs all tests in a package.

When you run a test, you’ll see output in the console indicating whether tests passed or failed.

Parametrised tests

We can execute the same test with different parameters using pytest.mark.parametrize.

Here’s an example:

import pytest


@pytest.mark.parametrize("number", [1, 2, 3, -1])
def test_positive_param(number):
    """
    Confirm that the number is positive.

    Arguments:
        number (float):
            Number to check.
    """
    assert number > 0, f"The number {number} is not positive."

In this example, we’re testing the same logic with four different values: 1, 2, 3, and -1. The last value, -1, will cause the test to fail. The error message includes the failed value for easy debugging.

Test output:
============================= test session starts ==============================
platform linux -- Python 3.11.9, pytest-8.4.1, pluggy-1.6.0
rootdir: /__w/des_rap_book/des_rap_book/pages/verification_validation
plugins: anyio-4.11.0, timeout-2.4.0
collected 4 items

tests_resources/test_example_param.py ...F                               [100%]

=================================== FAILURES ===================================
___________________________ test_positive_param[-1] ____________________________

number = -1

    @pytest.mark.parametrize("number", [1, 2, 3, -1])
    def test_positive_param(number):
        """
        Confirm that the number is positive.
    
        Arguments:
            number (float):
                Number to check.
        """
>       assert number > 0, f"The number {number} is not positive."
E       AssertionError: The number -1 is not positive.
E       assert -1 > 0

tests_resources/test_example_param.py:13: AssertionError
=========================== short test summary info ============================
FAILED tests_resources/test_example_param.py::test_positive_param[-1] - AssertionError: The number -1 is not positive.
assert -1 > 0
========================= 1 failed, 3 passed in 0.16s ==========================
<ExitCode.TESTS_FAILED: 1>
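
pytest.mark.parametrize is not limited to a single argument: several inputs can be varied together, and each case can be given a readable label via ids. A minimal sketch, unrelated to the simulation model, just to illustrate the syntax:

import pytest


@pytest.mark.parametrize("a, b, expected", [
    (1, 2, 3),
    (10, -4, 6)
], ids=["small", "mixed_sign"])
def test_addition(a, b, expected):
    """
    Confirm that a + b gives the expected total.
    """
    assert a + b == expected, f"Expected {a} + {b} to equal {expected}."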

We can execute the same test with different parameters using the patrick package.

Here’s an example:

library(patrick)
library(testthat)

with_parameters_test_that("Confirm that the number is positive", {
  expect_gt(number, 0L)
}, number = c(1L, 2L, 3L, -1L))

In this example, we’re testing the same logic with four different values: 1, 2, 3, and -1. The last value, -1, will cause the test to fail.


══ Testing test_example_param.R ════════════════════════════════════════════════

[ FAIL 0 | WARN 0 | SKIP 0 | PASS 0 ]
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 1 ]
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 2 ]
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 3 ]
[ FAIL 1 | WARN 0 | SKIP 0 | PASS 3 ]

── Failure ('test_example_param.R:5:3'): Confirm that the number is positive number=-1 ──
`number` is not strictly more than 0L. Difference: -1
Backtrace:
    β–†
 1. β”œβ”€rlang::eval_tidy(code, args)
 2. └─testthat::expect_gt(number, 0L) at test_example_param.R:5:3

[ FAIL 1 | WARN 0 | SKIP 0 | PASS 3 ]

Testing the model

There are many different ways of categorising tests. We will focus on three types:

  • Functional testing
  • Unit testing
  • Back testing

Functional tests

Functional tests verify that the system or components perform their intended functionality.

For example, we expect that the number of arrivals should decrease if:

  • The patient inter-arrival time increases.
  • The length of the data collection period decreases.
import pytest
from simulation import Parameters, Runner


@pytest.mark.parametrize("param_name, initial_value, adjusted_value", [
    ("interarrival_time", 2, 15),
    ("data_collection_period", 2000, 500)
])
def test_arrivals_decrease(param_name, initial_value, adjusted_value):
    """
    Test that adjusting parameters reduces the number of arrivals as expected.
    """
    # Run model with initial value
    param = Parameters(**{param_name: initial_value})
    experiment = Runner(param)
    initial_arrivals = experiment.run_single(run=0)["run"]["arrivals"]

    # Run model with adjusted value
    param = Parameters(**{param_name: adjusted_value})
    experiment = Runner(param)
    adjusted_arrivals = experiment.run_single(run=0)["run"]["arrivals"]

    # Check that arrivals from adjusted model are less
    assert initial_arrivals > adjusted_arrivals, (
        f'Changing "{param_name}" from {initial_value} to {adjusted_value} ' +
        "did not decrease the number of arrivals as expected: observed " +
        f"{initial_arrivals} and {adjusted_arrivals} arrivals, respectively."
    )
Test output:
============================= test session starts ==============================
platform linux -- Python 3.11.9, pytest-8.4.1, pluggy-1.6.0
rootdir: /__w/des_rap_book/des_rap_book/pages/verification_validation
plugins: anyio-4.11.0, timeout-2.4.0
collected 2 items

tests_resources/test_functional.py ..                                    [100%]

============================== 2 passed in 0.26s ===============================
<ExitCode.OK: 0>
library(testthat)
library(patrick)

with_parameters_test_that("Test that adjusting parameters reduces arrivals", {
  # Run model with initial value
  init_param <- create_params(number_of_runs = 1L)
  init_param[[param_name]] <- init_value
  init_results <- runner(init_param)[["run_results"]]

  # Run model with adjusted value
  adj_param <- create_params(number_of_runs = 1L)
  adj_param[[param_name]] <- adj_value
  adj_results <- runner(adj_param)[["run_results"]]

  # Check that arrivals from adjusted model are less
  expect_lt(adj_results[["arrivals"]], init_results[["arrivals"]])
},
cases(
  list(param_name = "interarrival_time",
       init_value = 2L, adj_value = 15L),
  list(param_name = "data_collection_period",
       init_value = 2000L, adj_value = 500L)
))
Test passed 🎊
Test passed πŸ˜€

Unit tests

Unit tests are a type of functional testing that focuses on individual components (e.g. methods, classes) and tests them in isolation to ensure they work as intended.

For example, we expect that our model should fail if the number of doctors or the patient inter-arrival time were set to 0. This is tested using test_zero_inputs:

import pytest
from simulation import Parameters, Model


@pytest.mark.parametrize("param_name, message", [
    ("number_of_doctors", '"capacity" must be > 0.'),
    ("interarrival_time", "mean must be positive, got 0")
])
def test_zero_inputs(param_name, message):
    """
    Check that the model fails when inputs that are zero are used.

    Parameters
    ----------
    param_name : str
        Name of parameter to change in the parameter class.
    message : str
        Error message that expect to see.
    """
    # Create parameter class with value set to zero
    param = Parameters(**{param_name: 0})

    # Verify that initialising the model raises an error
    with pytest.raises(ValueError, match=message):
        Model(param=param, run_number=0)
Test output:
============================= test session starts ==============================
platform linux -- Python 3.11.9, pytest-8.4.1, pluggy-1.6.0
rootdir: /__w/des_rap_book/des_rap_book/pages/verification_validation
plugins: anyio-4.11.0, timeout-2.4.0
collected 2 items

tests_resources/test_unit.py ..                                          [100%]

============================== 2 passed in 0.01s ===============================
<ExitCode.OK: 0>
library(patrick)
library(R.utils)
library(testthat)

with_parameters_test_that("Check that model fails with zero inputs", {
  # Create parameter object with value set to zero
  param <- create_params()
  param[[param_name]] <- 0L

  # Verify that initialising the model raises an error
  expect_error(
    withTimeout(
      model(param = param, run_number = 0L),
      timeout = 3L,
      onTimeout = "error"
    ),
    "must be greater than 0"
  )
}, param_name = c("number_of_doctors", "interarrival_time"))

These tests fail as we do not have error handling for these values. In fact, with the inter-arrival time set to 0, the model will run indefinitely and crash your computer! Hence the use of withTimeout in the test.

To address this, we could add error handling which raises an error for users if they try to input a value of 0 (see the parameter validation page).
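
For illustration only, here is a minimal Python sketch of that kind of guard: a hypothetical parameter class that rejects non-positive inputs at construction. The example models implement their own validation, described on the parameter validation page.

class IllustrativeParameters:
    """
    Hypothetical parameter class that rejects non-positive inputs.
    """

    def __init__(self, number_of_doctors=3, interarrival_time=5):
        # Raise a clear error as soon as an invalid value is supplied
        for name, value in [("number_of_doctors", number_of_doctors),
                            ("interarrival_time", interarrival_time)]:
            if value <= 0:
                raise ValueError(
                    f"{name} must be greater than 0, got {value}."
                )
        self.number_of_doctors = number_of_doctors
        self.interarrival_time = interarrival_time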


══ Testing test_unit.R ═════════════════════════════════════════════════════════

[ FAIL 0 | WARN 0 | SKIP 0 | PASS 0 ]
[ FAIL 1 | WARN 0 | SKIP 0 | PASS 0 ]
[ FAIL 2 | WARN 0 | SKIP 0 | PASS 0 ]

── Failure ('test_unit.R:11:3'): Check that model fails with zero inputs param_name=number_of_doctors ──
`withTimeout(...)` threw an error with unexpected message.
Expected match: "must be greater than 0"
Actual message: "β„Ή In argument: `wait_time = .data[[\"serve_start\"]] -\n  .data[[\"start_time\"]]`.\nCaused by error in `.data[[\"serve_start\"]]`:\n! Column `serve_start` not found in `.data`."
Backtrace:
    β–†
 1. β”œβ”€rlang::eval_tidy(code, args)
 2. └─testthat::expect_error(...) at test_unit.R:11:3

── Error ('test_unit.R:11:3'): Check that model fails with zero inputs param_name=interarrival_time ──
<subscriptOutOfBoundsError/error/condition>
Error in `.subset2(cnd, "rlang")`: subscript out of bounds
Backtrace:
     β–†
  1. β”œβ”€rlang::eval_tidy(code, args)
  2. β”œβ”€testthat::expect_error(...) at test_unit.R:11:3
  3. β”‚ └─testthat:::quasi_capture(...) at testthat/R/expect-condition.R:126:5
  4. β”‚   β”œβ”€testthat (local) .capture(...) at testthat/R/quasi-label.R:54:3
  5. β”‚   β”‚ └─base::withCallingHandlers(...) at testthat/R/deprec-condition.R:23:5
  6. β”‚   └─rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo)) at testthat/R/quasi-label.R:54:3
  7. β”œβ”€R.utils::withTimeout(...) at rlang/R/eval.R:96:3
  8. β”‚ └─base::tryCatch(...) at R.utils/R/withTimeout.R:149:3
  9. β”‚   └─base (local) tryCatchList(expr, classes, parentenv, handlers)
 10. β”‚     └─base (local) tryCatchOne(expr, names, parentenv, handlers[[1L]])
 11. β”‚       └─value[[3L]](cond)
 12. β”‚         β”œβ”€R.oo::throw(ex) at R.utils/R/withTimeout.R:159:9
 13. β”‚         └─R.oo::throw.Exception(ex) at staging/1/R.oo:1:14
 14. β”‚           └─base::signalCondition(this) at R.oo/R/Exception.R:313:3
 15. └─testthat (local) `<fn>`(`<TmtExcpt>`)
 16.   └─rlang::cnd_entrace(cnd) at testthat/R/deprec-condition.R:7:7
 17.     └─rlang:::cnd_some(cnd, function(x) !is_null(x[["trace"]])) at rlang/R/cnd-entrace.R:242:3

[ FAIL 2 | WARN 0 | SKIP 0 | PASS 0 ]

Back tests

Back tests check that the model code produces results consistent with those generated historically or by previous versions of the code.

First, we’ll generate a set of expected results using a specific set of parameters. Although specifying these may seem unnecessary here, as they match our default parameters, we do so explicitly to ensure the test always uses the same parameters, even if the defaults later change.

param = Parameters(
    interarrival_time=5,
    consultation_time=10,
    number_of_doctors=3,
    warm_up_period=30,
    data_collection_period=40,
    number_of_runs=5,
    verbose=False
)
param <- create_params(
  interarrival_time = 5L,
  consultation_time = 10L,
  number_of_doctors = 3L,
  warm_up_period = 30L,
  data_collection_period = 40L,
  number_of_runs = 5L, verbose = FALSE
)

We’ll then run the model and save the results to .csv files.

runner = Runner(param=param)
results = runner.run_reps()
results["patient"].to_csv("tests_resources/python_patient.csv", index=False)
results["run"].to_csv("tests_resources/python_run.csv", index=False)
results["overall"].to_csv("tests_resources/python_overall.csv", index=False)
results <- runner(param)
write.csv(arrange(results[["arrivals"]], replication, start_time),
          file.path("tests_resources", "r_arrivals.csv"),
          row.names = FALSE)
write.csv(results[["resources"]],
          file.path("tests_resources", "r_resources.csv"),
          row.names = FALSE)
write.csv(results[["run_results"]],
          file.path("tests_resources", "r_run_results.csv"),
          row.names = FALSE)

In the test, we’ll run the model with the same parameters, then import the saved .csv files and compare against them to check for any differences.

from pathlib import Path

import pandas as pd
from simulation import Parameters, Runner


def test_reproduction():
    """
    Check that results from particular run of the model match those
    previously generated using the code.
    """
    # Define model parameters
    param = Parameters(
        interarrival_time=5,
        consultation_time=10,
        number_of_doctors=3,
        warm_up_period=30,
        data_collection_period=40,
        number_of_runs=5,
        verbose=False
    )

    # Run simulation
    runner = Runner(param=param)
    results = runner.run_reps()

    # Import expected results
    exp_patient = pd.read_csv(
        Path(__file__).parent.joinpath("python_patient.csv")
    )
    exp_run = pd.read_csv(
        Path(__file__).parent.joinpath("python_run.csv")
    )
    exp_overall = pd.read_csv(
        Path(__file__).parent.joinpath("python_overall.csv")
    )

    # Compare results
    pd.testing.assert_frame_equal(results["patient"], exp_patient)
    pd.testing.assert_frame_equal(results["run"], exp_run)
    pd.testing.assert_frame_equal(
        results["overall"].reset_index(drop=True), exp_overall
    )
Test output:
============================= test session starts ==============================
platform linux -- Python 3.11.9, pytest-8.4.1, pluggy-1.6.0
rootdir: /__w/des_rap_book/des_rap_book/pages/verification_validation
plugins: anyio-4.11.0, timeout-2.4.0
collected 1 item

tests_resources/test_back.py .                                           [100%]

============================== 1 passed in 0.03s ===============================
<ExitCode.OK: 0>
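
If small floating-point differences are acceptable (for example, across operating systems), pandas can compare with a tolerance rather than exactly. A minimal sketch, reusing the results and exp_run data frames from the test above (the tolerance value is arbitrary):

pd.testing.assert_frame_equal(results["run"], exp_run, check_exact=False,
                              rtol=1e-6)
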
testthat::test_file(file.path("tests_resources", "test_back.R"))

══ Testing test_back.R ═════════════════════════════════════════════════════════

[ FAIL 0 | WARN 0 | SKIP 0 | PASS 0 ]
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 1 ]
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 2 ]
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 3 ] Done!

We generate the expected results for our back test in a separate Python file or Jupyter notebook, rather than within the test itself.

We generate the expected results for our back test in a separate R file or R Markdown file, rather than within the test itself.

We would then generally run the tests against the same pre-generated .csv files, without regenerating them. However, the test will fail if the model logic is intentionally changed, as the same parameters will then produce different results.

In that case, if we are certain that these changes are the reason for differing results, we should re-run the Python file or notebook to regenerate the .csv files.

In that case, if we are certain that these changes are the reason for differing results, we should re-run the R file or R Markdown file to regenerate the .csv files.

It is crucial to exercise caution when doing this, to avoid unintentionally overwriting correct expected results.

Test coverage

Coverage refers to the percentage of your code that is executed when you run your tests. It can help you spot parts of your code that are not included in any tests.

The pytest-cov package can be used to run coverage calculations alongside pytest. After installing it, simply run tests with the --cov flag:

pytest --cov
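
To restrict coverage to your simulation package and list the lines that are missed, pytest-cov also accepts these options (the package name here is illustrative):

pytest --cov=simulation --cov-report=term-missing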

See the GitHub Actions page for guidance on generating a coverage badge for your README within CI/CD.


If your research is structured as a package, then you can use devtools to calculate coverage:

devtools::test_coverage()


If not structured as a package, you can use covr’s file_coverage() function - for example:

file_coverage(source_files = c("param.R", "model.R"),
              test_files = c("test-unit.R", "test-back.R"))

Explore the example models

GitHub: pydesrap_mms repository

GitHub: pydesrap_stroke repository

A wide variety of tests are available in the tests/ folder.

Data for back tests are generated within notebooks/generate_exp_results.ipynb.

Tests are run and coverage is calculated on pushes to main via GitHub Actions (.github/workflows/tests.yaml).

GitHub: rdesrap_mms repository

GitHub: rdesrap_stroke repository

A wide variety of tests are available in the tests/ folder.

Data for back tests are generated within notebooks/generate_exp_results.Rmd.

Tests are run via GitHub Actions (.github/workflows/R-CMD-check.yaml) and a coverage badge is updated (.github/workflows/test-coverage.yaml).

Test yourself

Write tests! Look at example models for inspiration on what and how to test.

Start writing tests early, and run them often to catch issues as you develop.

Use coverage calculations to help you spot parts of your code that are not run by any tests.