Tests
Learning objectives:
- Understand how to write and run tests.
- Explore examples of the types of test you can run for your simulation model.
Relevant reproducibility guidelines:
- NHS Levels of RAP (🥈): Pipeline includes a testing framework (unit tests, back tests).
Pre-reading:
This page will run tests on the model from the parallel processing page (which is likewise used on the scenario and sensitivity analysis pages).
Entity generation → Entity processing → Initialisation bias → Performance measures → Replications → Parallel processing → Tests
Testing is the process of evaluating a model to ensure it works as expected, gives reliable results, and can handle different conditions. By systematically checking for errors, inconsistencies, or unexpected behaviours, testing helps improve the quality of a model, catch errors, and prevent future issues. Tests can also be used to run cases that check your results are consistent (i.e. that they are reproducible).
When you create a model, you will naturally carry out tests, with simple manual checks where you observe outputs and ensure they look right. These checks can be formalised and automated so that you can run them after any changes, and catch any issues that arise.
Introduction to writing and running tests
Writing a basic test
A popular framework for testing in Python is pytest.
Each test in pytest is a function that contains an assertion statement to check a condition (e.g. number > 0). If the condition fails, pytest will return an error message (e.g. "The number should be positive").
Tests are typically stored in a folder called tests/, and filenames start with the prefix test_. This naming convention allows pytest to automatically discover and run all the tests in the folder.
Here's an example of a simple test using pytest:
import pytest


def test_positive():
    """
    Confirm that the number is positive.
    """
    number = 5
    assert number > 0, "The number should be positive"
A popular framework for testing in R is testthat.
Each test uses the test_that() function and is structured around expectations that check specific conditions (e.g. expect_true(), expect_false(), expect_equal(), expect_error() - see the package index for more).
Tests are typically stored in a folder called tests/testthat, and filenames start with the prefix test_. This naming convention allows testthat to automatically discover and run all tests in the folder.
Here's an example of a simple test using testthat:
library(testthat)
test_that("Confirm that the number is positive", {
<- 5L
number expect_gt(number, 0L)
})
Test passed
Running tests
Tests are typically run from the terminal. Commands include:
- pytest - runs all tests.
- pytest tests/filename.py - runs tests from a specific file.
When you run a test, you'll see output in the terminal indicating whether tests passed or failed.
Tests are typically run from the R console. Commands include:
- testthat::test_dir() - runs all tests in the current directory (and sub-directories).
- testthat::test_file("tests/testthat/test-filename.R") - runs tests from a specific file.
- devtools::test() - runs all tests in a package.
When you run a test, you'll see output in the console indicating whether tests passed or failed.
Parametrised tests
We can execute the same test with different parameters using pytest.mark.parametrize.
Here's an example:
import pytest


@pytest.mark.parametrize("number", [1, 2, 3, -1])
def test_positive_param(number):
    """
    Confirm that the number is positive.

    Arguments:
        number (float):
            Number to check.
    """
    assert number > 0, f"The number {number} is not positive."
In this example, we're testing the same logic with four different values: 1, 2, 3, and -1. The last value, -1, will cause the test to fail. The error message includes the failed value for easy debugging.
We can execute the same test with different parameters using the patrick package.
Here's an example:
library(patrick)
library(testthat)
with_parameters_test_that("Confirm that the number is positive", {
expect_gt(number, 0L)
number = c(1L, 2L, 3L, -1L)) },
In this example, we're testing the same logic with four different values: 1, 2, 3, and -1. The last value, -1, will cause the test to fail.
── Testing test_example_param.R ────────────────────────────────────────────────
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 0 ]
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 1 ]
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 2 ]
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 3 ]
[ FAIL 1 | WARN 0 | SKIP 0 | PASS 3 ]
── Failure ('test_example_param.R:5:3'): Confirm that the number is positive number=-1 ──
`number` is not strictly more than 0L. Difference: -1
Backtrace:
    ▆
 1. ├─rlang::eval_tidy(code, args)
 2. └─testthat::expect_gt(number, 0L) at test_example_param.R:5:3
[ FAIL 1 | WARN 0 | SKIP 0 | PASS 3 ]
Testing the model
There are many different ways of categorising tests. We will focus on three types:
- Functional testing
- Unit testing
- Back testing
Functional tests
Functional tests verify that the system or components perform their intended functionality.
For example, we expect that the number of arrivals should decrease if:
- The patient inter-arrival time increases.
- The length of the data collection period decreases.
import pytest

from simulation import Parameters, Runner


@pytest.mark.parametrize("param_name, initial_value, adjusted_value", [
    ("interarrival_time", 2, 15),
    ("data_collection_period", 2000, 500)
])
def test_arrivals_decrease(param_name, initial_value, adjusted_value):
    """
    Test that adjusting parameters reduces the number of arrivals as expected.
    """
    # Run model with initial value
    param = Parameters(**{param_name: initial_value})
    experiment = Runner(param)
    initial_arrivals = experiment.run_single(run=0)["run"]["arrivals"]

    # Run model with adjusted value
    param = Parameters(**{param_name: adjusted_value})
    experiment = Runner(param)
    adjusted_arrivals = experiment.run_single(run=0)["run"]["arrivals"]

    # Check that arrivals from adjusted model are less
    assert initial_arrivals > adjusted_arrivals, (
        f'Changing "{param_name}" from {initial_value} to {adjusted_value} ' +
        "did not decrease the number of arrivals as expected: observed " +
        f"{initial_arrivals} and {adjusted_arrivals} arrivals, respectively."
    )
library(testthat)
library(patrick)

with_parameters_test_that("Test that adjusting parameters reduces arrivals", {

  # Run model with initial value
  init_param <- create_params(number_of_runs = 1L)
  init_param[[param_name]] <- init_value
  init_results <- runner(init_param)[["run_results"]]

  # Run model with adjusted value
  adj_param <- create_params(number_of_runs = 1L)
  adj_param[[param_name]] <- adj_value
  adj_results <- runner(adj_param)[["run_results"]]

  # Check that arrivals from adjusted model are less
  expect_lt(adj_results[["arrivals"]], init_results[["arrivals"]])
}, cases(
  list(param_name = "interarrival_time",
       init_value = 2L, adj_value = 15L),
  list(param_name = "data_collection_period",
       init_value = 2000L, adj_value = 500L)
))
Test passed
Test passed
Unit tests
Unit tests are a type of functional testing that focuses on individual components (e.g. methods, classes) and tests them in isolation to ensure they work as intended.
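For instance, a unit test might target a single helper function on its own, without running the full simulation. Here is an illustrative sketch - the calculate_mean_wait() helper is hypothetical and not part of the example models:
def calculate_mean_wait(wait_times):
    """Hypothetical helper: mean waiting time, or 0 if no patients were seen."""
    return sum(wait_times) / len(wait_times) if wait_times else 0


def test_calculate_mean_wait():
    """Unit test checking the helper in isolation from the rest of the model."""
    assert calculate_mean_wait([2, 4, 6]) == 4
    assert calculate_mean_wait([]) == 0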
For example, we expect that our model should fail if the number of doctors or the patient inter-arrival time were set to 0. This is tested using test_zero_inputs:
import pytest

from simulation import Parameters, Model


@pytest.mark.parametrize("param_name, message", [
    ("number_of_doctors", '"capacity" must be > 0.'),
    ("interarrival_time", "mean must be positive, got 0")
])
def test_zero_inputs(param_name, message):
    """
    Check that the model fails when inputs that are zero are used.

    Parameters
    ----------
    param_name : str
        Name of parameter to change in the parameter class.
    message : str
        Error message that we expect to see.
    """
    # Create parameter class with value set to zero
    param = Parameters(**{param_name: 0})

    # Verify that initialising the model raises an error
    with pytest.raises(ValueError, match=message):
        Model(param=param, run_number=0)
library(patrick)
library(R.utils)
library(testthat)

with_parameters_test_that("Check that model fails with zero inputs", {

  # Create parameter object with value set to zero
  param <- create_params()
  param[[param_name]] <- 0L

  # Verify that initialising the model raises an error
  expect_error(
    withTimeout(
      model(param = param, run_number = 0L),
      timeout = 3L,
      onTimeout = "error"
    ),
    "must be greater than 0"
  )
}, param_name = c("number_of_doctors", "interarrival_time"))
These tests fail as we do not have error handling for these values. In fact, with the inter-arrival time set to 0, the model will run indefinitely and crash your computer! Hence the use of withTimeout in the test.
To address this, we could add error handling which raises an error for users if they try to input a value of 0 (see the parameter validation page).
── Testing test_unit.R ─────────────────────────────────────────────────────────
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 0 ]
[ FAIL 1 | WARN 0 | SKIP 0 | PASS 0 ]
[ FAIL 2 | WARN 0 | SKIP 0 | PASS 0 ]
── Failure ('test_unit.R:11:3'): Check that model fails with zero inputs param_name=number_of_doctors ──
`withTimeout(...)` threw an error with unexpected message.
Expected match: "must be greater than 0"
Actual message: "ℹ In argument: `wait_time = .data[[\"serve_start\"]] -\n .data[[\"start_time\"]]`.\nCaused by error in `.data[[\"serve_start\"]]`:\n! Column `serve_start` not found in `.data`."
Backtrace:
    ▆
 1. ├─rlang::eval_tidy(code, args)
 2. └─testthat::expect_error(...) at test_unit.R:11:3
── Error ('test_unit.R:11:3'): Check that model fails with zero inputs param_name=interarrival_time ──
<subscriptOutOfBoundsError/error/condition>
Error in `.subset2(cnd, "rlang")`: subscript out of bounds
Backtrace:
     ▆
  1. ├─rlang::eval_tidy(code, args)
  2. ├─testthat::expect_error(...) at test_unit.R:11:3
  3. │ ├─testthat:::quasi_capture(...) at testthat/R/expect-condition.R:126:5
  4. │ ├─testthat (local) .capture(...) at testthat/R/quasi-label.R:54:3
  5. │ │ └─base::withCallingHandlers(...) at testthat/R/deprec-condition.R:23:5
  6. │ └─rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo)) at testthat/R/quasi-label.R:54:3
  7. ├─R.utils::withTimeout(...) at rlang/R/eval.R:96:3
  8. │ ├─base::tryCatch(...) at R.utils/R/withTimeout.R:149:3
  9. │ ├─base (local) tryCatchList(expr, classes, parentenv, handlers)
 10. │ ├─base (local) tryCatchOne(expr, names, parentenv, handlers[[1L]])
 11. │ ├─value[[3L]](cond)
 12. │ ├─R.oo::throw(ex) at R.utils/R/withTimeout.R:159:9
 13. │ ├─R.oo::throw.Exception(ex) at staging/1/R.oo:1:14
 14. │ └─base::signalCondition(this) at R.oo/R/Exception.R:313:3
 15. ├─testthat (local) `<fn>`(`<TmtExcpt>`)
 16. ├─rlang::cnd_entrace(cnd) at testthat/R/deprec-condition.R:7:7
 17. └─rlang:::cnd_some(cnd, function(x) !is_null(x[["trace"]])) at rlang/R/cnd-entrace.R:242:3
[ FAIL 2 | WARN 0 | SKIP 0 | PASS 0 ]
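One way to add that error handling is to check inputs when the model is initialised, so that an informative error is raised immediately for invalid values. The sketch below is illustrative only: it is a simplified stand-in for the Model class used above, reusing the error messages from test_zero_inputs, and is not how the example models implement validation (see the parameter validation page):
class Model:
    """Simplified model class showing input validation (illustrative only)."""

    def __init__(self, param, run_number):
        # Reject inter-arrival times of zero (or less), which would otherwise
        # make the model generate arrivals endlessly.
        if param.interarrival_time <= 0:
            raise ValueError(
                f"mean must be positive, got {param.interarrival_time}")
        # Reject doctor counts of zero (or less) before resources are created.
        if param.number_of_doctors <= 0:
            raise ValueError('"capacity" must be > 0.')
        self.param = param
        self.run_number = run_number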
Back tests
Back tests check that the model code produces results consistent with those generated historically/from prior code.
First, we'll generate a set of expected results, with a specific set of parameters. Although this may seem unnecessary in this case, as they match our default parameters, these are still specified to ensure that we are testing on the same parameters, even if defaults change.
param = Parameters(
    interarrival_time=5,
    consultation_time=10,
    number_of_doctors=3,
    warm_up_period=30,
    data_collection_period=40,
    number_of_runs=5,
    verbose=False
)
param <- create_params(
  interarrival_time = 5L,
  consultation_time = 10L,
  number_of_doctors = 3L,
  warm_up_period = 30L,
  data_collection_period = 40L,
  number_of_runs = 5L,
  verbose = FALSE
)
We'll then run the model and save the results to .csv files.
runner = Runner(param=param)
results = runner.run_reps()
results["patient"].to_csv("tests_resources/python_patient.csv", index=False)
results["run"].to_csv("tests_resources/python_run.csv", index=False)
results["overall"].to_csv("tests_resources/python_overall.csv", index=False)
results <- runner(param)
write.csv(arrange(results[["arrivals"]], replication, start_time),
          file.path("tests_resources", "r_arrivals.csv"),
          row.names = FALSE)
write.csv(results[["resources"]],
          file.path("tests_resources", "r_resources.csv"),
          row.names = FALSE)
write.csv(results[["run_results"]],
          file.path("tests_resources", "r_run_results.csv"),
          row.names = FALSE)
In the test, we'll run the model with the same parameters, then import the saved .csv files and compare against them to check for any differences.
from pathlib import Path

import pandas as pd

from simulation import Parameters, Runner


def test_reproduction():
    """
    Check that results from a particular run of the model match those
    previously generated using the code.
    """
    # Define model parameters
    param = Parameters(
        interarrival_time=5,
        consultation_time=10,
        number_of_doctors=3,
        warm_up_period=30,
        data_collection_period=40,
        number_of_runs=5,
        verbose=False
    )

    # Run simulation
    runner = Runner(param=param)
    results = runner.run_reps()

    # Import expected results
    exp_patient = pd.read_csv(
        Path(__file__).parent.joinpath("python_patient.csv")
    )
    exp_run = pd.read_csv(
        Path(__file__).parent.joinpath("python_run.csv")
    )
    exp_overall = pd.read_csv(
        Path(__file__).parent.joinpath("python_overall.csv")
    )

    # Compare results
    pd.testing.assert_frame_equal(results["patient"], exp_patient)
    pd.testing.assert_frame_equal(results["run"], exp_run)
    pd.testing.assert_frame_equal(
        results["overall"].reset_index(drop=True), exp_overall
    )
testthat::test_file(file.path("tests_resources", "test_back.R"))
── Testing test_back.R ─────────────────────────────────────────────────────────
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 0 ]
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 1 ]
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 2 ]
[ FAIL 0 | WARN 0 | SKIP 0 | PASS 3 ] Done!
We generate the expected results for our back test in a separate Python file or Jupyter notebook, rather than within the test itself.
We generate the expected results for our back test in a separate R file or R Markdown file, rather than within the test itself.
We would then generally run tests using the same pre-generated .csv files, without regenerating them. However, the test will fail if the model logic is intentionally changed, leading to different results from the same parameters.
In that case, if we are certain that these changes are the reason for the differing results, we should re-run the Python file or notebook to regenerate the .csv files.
In that case, if we are certain that these changes are the reason for the differing results, we should re-run the R file or R Markdown file to regenerate the .csv files.
It is crucial to exercise caution when doing this, to avoid unintentionally overwriting correct expected results.
Test coverage
Coverage refers to the percentage of your code that is executed when you run your tests. It can help you spot parts of your code that are not included in any tests.
The pytest-cov package can be used to run coverage calculations alongside pytest. After installing it, simply run tests with the --cov flag:
pytest --cov
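To also see which lines are missed, pytest-cov can limit coverage to your source code and print a line-by-line summary - for example (assuming your code lives in a folder or package named simulation, as in the examples above):
pytest --cov=simulation --cov-report=term-missing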
See the GitHub actions page for guidance on generating a coverage badge for your README within CI/CD.
If your research is structured as a package, then you can use devtools to calculate coverage:
devtools::test_coverage()
If not structured as a package, you can use covr's file_coverage() function - for example:
file_coverage(source_files = c("param.R", "model.R"),
              test_files = c("test-unit.R", "test-back.R"))
Explore the example models
Click to visit pydesrap_mms repository
Click to visit pydesrap_stroke repository
A wide variety of tests are available in the tests/ folder.
Data for back tests are generated within notebooks/generate_exp_results.ipynb.
Tests are run and coverage is calculated on pushes to main via GitHub actions (.github/workflows/tests.yaml).
Click to visit rdesrap_mms repository
Click to visit rdesrap_stroke repository
A wide variety of tests are available in the tests/ folder.
Data for back tests are generated within notebooks/generate_exp_results.Rmd.
Tests are run via GitHub actions (.github/workflows/R-CMD-check.yaml) and a coverage badge is updated (.github/workflows/test-coverage.yaml).
Test yourself
Write tests! Look at example models for inspiration on what and how to test.
Start writing tests early, and run them often to catch issues as you develop.
Use coverage calculations to help you spot parts of your code that are not run by any tests.