Guidelines
In guiding you to develop reproducible discrete-event simulation (DES) models, this book draws from two relevant guidelines:
Recommendations for reproducible DES from Heather et al. (2025).
A framework for reproducible analytical pipelines (RAP) from the NHS RAP Community of Practice.
Each chapter demonstrates how to put these guidelines into practice. Use the links in the tables below to jump straight to the relevant page for each item in the frameworks.
This page may be particularly helpful if you are not building a healthcare DES model but are simply interested in improving reproducibility more generally.
1 Heather et al. (2025) Recommendations
As part of the project STARS (Sharing Tools and Artefacts for Reproducible Simulations), Heather et al. (2025) conducted a series of computational reproducibility assessments. From these, several recommendations were shared to support the reproducibility of healthcare DES models, as described in:
Heather, A., Monks, T., Harper, A., Mustafee, N., & Mayne, A. (2025). On the reproducibility of discrete-event simulation studies in health research: an empirical study using open models. arXiv. https://doi.org/10.48550/arXiv.2501.13137.
Those marked with a star (⭐) were identified as having the greatest impact in Heather et al. (2025).
1.1 Recommendations to support reproduction
Recommendation | Chapter |
---|---|
Set-up | |
Share code with an open licence ⭐ | Licensing |
Link publication to a specific version of the code | Changelog |
List dependencies and versions | Dependency management |
Running the model | |
Provide code for all scenarios and sensitivity analyses ⭐ | Scenario analysis; Sensitivity analysis |
Ensure model parameters are correct ⭐ | Full run |
Control randomness | Randomness |
Outputs | |
Include code to calculate all required model outputs ⭐ | Performance measures |
Include code to generate the tables, figures, and other reported results ⭐ | Producing tables and figures |
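As a minimal illustration of the "control randomness" recommendation above, the sketch below uses a dedicated seeded generator rather than the global random state, so that identical seeds always reproduce identical results. This is a hypothetical toy example (the function name, rates, and model structure are illustrative assumptions, not code from the book):

```python
import random


def run_model(seed, n_patients=100):
    """Toy stand-in for a DES run: controlled randomness via a seeded generator.

    A dedicated random.Random instance (rather than the module-level global
    state) means other code sampling random numbers cannot disturb the model's
    stream, so results are reproducible for a given seed.
    """
    rng = random.Random(seed)
    # Illustrative exponential inter-arrival and service times (rates assumed)
    waits = [rng.expovariate(1 / 5) + rng.expovariate(1 / 4)
             for _ in range(n_patients)]
    return sum(waits) / len(waits)


# Identical seeds reproduce identical results; different seeds differ
assert run_model(seed=42) == run_model(seed=42)
assert run_model(seed=42) != run_model(seed=43)
```

In a real DES you would typically hold one generator per sampling activity (arrivals, service times, and so on), so results stay comparable across scenarios.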
1.2 Recommendations to support troubleshooting and reuse
Recommendation | Chapter |
---|---|
Design | |
Separate model code from applications | ❓ |
Avoid hard-coded parameters | Parameters from script; Parameters from file |
Minimise code duplication | Code organisation |
Clarity | |
Comment sufficiently | Docstrings |
Ensure clarity and consistency in the model results tables | Performance measures |
Include run instructions | Documentation |
State run times and machine specifications | Documentation |
Functionality | |
Optimise model run time | Parallel processing |
Save outputs to a file | Producing tables and figures |
Avoid excessive output files | ❓ |
Address large file sizes | ❓ |
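To illustrate "avoid hard-coded parameters", one common pattern is to gather all parameters in a single object and (optionally) load them from a file, rather than scattering literals through the model code. This is a hedged sketch using the standard library only; the class name, fields, and defaults are assumptions for illustration:

```python
import json
from dataclasses import dataclass


@dataclass
class Params:
    """All model parameters in one place, instead of literals in the model code."""
    n_beds: int = 10
    mean_iat: float = 5.0  # mean inter-arrival time in minutes (illustrative)


def load_params(path):
    """Build a Params object from a JSON file, keeping defaults for absent fields."""
    with open(path) as f:
        return Params(**json.load(f))
```

A scenario analysis then becomes a matter of constructing `Params` with different values (or pointing `load_params` at a different file), with no edits to the model code itself.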
2 NHS ‘Levels of RAP’ Maturity Framework
The following framework has been directly copied from the RAP Community of Practice repository/website: NHS RAP Levels of RAP Framework.
This framework is maintained by the NHS RAP Community of Practice and is © 2024 Crown Copyright (NHS England), shared by them under the terms of the Open Government Licence v3.0.
The specific version of the framework copied below is that from commit 2549256 (9th September 2024).
2.1 🥉 Baseline
RAP fundamentals offering resilience against future change.
Criteria | Chapter |
---|---|
Data produced by code in an open-source language (e.g., Python, R, SQL). | Open-source languages |
Code is version controlled (see Git basics and using Git collaboratively guides). | Version control |
Repository includes a README.md file (or equivalent) that clearly details steps a user must follow to reproduce the code (use NHS Open Source Policy section on Readmes as a guide). | Documentation |
Code has been peer reviewed. | Peer review |
Code is published in the open and linked to & from accompanying publication (if relevant). | Sharing and archiving |
2.2 🥈 Silver
Implementing best practice by following good analytical and software engineering standards.
Meeting all of the above requirements, plus:
Criteria | Chapter |
---|---|
Outputs are produced by code with minimal manual intervention. | Full run |
Code is well-documented including user guidance, explanation of code structure & methodology and docstrings for functions. | Documentation; Docstrings |
Code is well-organised following standard directory format. | Structuring as a package |
Reusable functions and/or classes are used where appropriate. | Code organisation |
Code adheres to agreed coding standards (e.g. PEP8, style guide for PySpark). | Linting |
Pipeline includes a testing framework (unit tests, back tests). | Tests |
Repository includes dependency information (e.g. requirements.txt, PipFile, environment.yml). | Dependency management |
Logs are automatically recorded by the pipeline to ensure outputs are as expected. | Logging |
Data is handled and output in a Tidy data format. | Parameters from file; Performance measures |
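As a small illustration of the "docstrings for functions" criterion above, the sketch below documents a function in NumPy docstring style (one widely used convention; the function itself is a hypothetical example, not code from the book):

```python
def mean_wait(waits):
    """
    Calculate the mean patient waiting time.

    Parameters
    ----------
    waits : list of float
        Waiting time in minutes for each patient.

    Returns
    -------
    float
        Mean waiting time in minutes.
    """
    return sum(waits) / len(waits)
```

Documenting parameters, units, and return values in this structured way lets tools like Sphinx render the docstrings into user guidance automatically.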
2.3 🥇 Gold
Analysis as a product to further elevate your analytical work and enhance its reusability to the public.
Meeting all of the above requirements, plus:
Criteria | Chapter |
---|---|
Code is fully packaged. | Structuring as a package |
Repository automatically runs tests etc. via CI/CD or a different integration/deployment tool e.g. GitHub Actions. | GitHub actions |
Process runs based on event-based triggers (e.g., new data in database) or on a schedule. | N/A |
Changes to the RAP are clearly signposted. E.g. a changelog in the package, releases etc. (See gov.uk info on Semantic Versioning). | Changelog |
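To illustrate the final criterion, a changelog following the common "Keep a Changelog" layout with semantic version numbers might look like the sketch below (all entries and dates are hypothetical):

```markdown
## [1.1.0] - 2024-09-09
### Added
- Sensitivity analysis varying the number of beds.
### Fixed
- Off-by-one error in the arrival counter.

## [1.0.0] - 2024-06-01
- First release, accompanying the publication.
```

Tagging each release in version control (e.g. a GitHub release per version) then lets a publication link to the exact code that produced its results.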