Reporting guidelines
This page evaluates the extent to which the journal article meets the criteria from two discrete-event simulation study reporting guidelines:
- Monks et al. (2019) - STRESS-DES: Strengthening The Reporting of Empirical Simulation Studies (Discrete-Event Simulation) (Version 1.0).
- Zhang, Lhachimi, and Rogowski (2020) - The generic reporting checklist for healthcare-related discrete event simulation studies derived from the International Society for Pharmacoeconomics and Outcomes Research–Society for Medical Decision Making (ISPOR-SDM) Modeling Good Research Practices Task Force reports.
STRESS-DES
Of the 24 items in the checklist:
- 16 were met fully (✅)
- 1 was partially met (🟡)
- 2 were not met (❌)
- 5 were not applicable (N/A)
Item | Recommendation | Met by study? | Evidence |
---|---|---|---|
Objectives | |||
1.1 Purpose of the model | Explain the background and objectives for the model | ✅ Fully | 1 Introduction e.g. “The objective of this study was to determine the cost effectiveness of primary care-based case detection scenarios, which vary in terms of their initial eligibility criteria, case detection technology, follow-up intervals, and subsequent disease management.” Johnson et al. (2021) |
1.2 Model outputs | Define all quantitative performance measures that are reported, using equations where necessary. Specify how and when they are calculated during the model run along with how any measures of error such as confidence intervals are calculated. | ✅ Fully | 2.2 Evaluation Platform in COPD (EPIC) lists outcomes modelled by EPIC which include demographics and risk factors, COPD occurrence, lung function trajectories, COPD severity, COPD exacerbation occurrence and severity, mortality related to COPD exacerbations, background mortality, and costs and quality-adjusted life years (QALYs). 2.5.1 Reference Case - the focus of this paper in terms of outcomes: “The main outcomes of this analysis were total costs and QALYs accumulated over a 20-year time horizon” Johnson et al. (2021) |
1.3 Experimentation aims | If the model has been used for experimentation, state the objectives that it was used to investigate. (A) Scenario based analysis – Provide a name and description for each scenario, providing a rationale for the choice of scenarios and ensure that item 2.3 (below) is completed. (B) Design of experiments – Provide details of the overall design of the experiments with reference to performance measures and their parameters (provide further details in data below). (C) Simulation Optimisation – (if appropriate) Provide full details of what is to be optimised, the parameters that were included and the algorithm(s) that was be used. Where possible provide a citation of the algorithm(s). | ✅ Fully | Scenario based - have (a) 2.5.1 Reference Case - with “16 case detection scenarios” which can see in Table 1 - and then (b) 2.5.2 Sensitivity Analyses which explore outcomes of those scenarios with varying parameters e.g. medication adherence, smoking cessation. Johnson et al. (2021) |
Logic | |||
2.1 Base model overview diagram | Describe the base model using appropriate diagrams and description. This could include one or more process flow, activity cycle or equivalent diagrams sufficient to describe the model to readers. Avoid complicated diagrams in the main text. The goal is to describe the breadth and depth of the model with respect to the system being studied. | ✅ Fully | Figure 1 : “Schematic illustration of the Evaluation Platforms in COPD (EPIC). Modules added to the original version of EPIC in order to simulate the case detection pathway are shown in grey”Johnson et al. (2021) |
2.2 Base model logic | Give details of the base model logic. Give additional model logic details sufficient to communicate to the reader how the model works. | ✅ Fully | Figure 1 , 2.1 Case Detection Scenarios , 2.2 Evaluation Platform in COPD (EPIC) , 2.3 Evaluation of Case Detection Scenarios Johnson et al. (2021) |
2.3 Scenario logic | Give details of the logical difference between the base case model and scenarios (if any). This could be incorporated as text or where differences are substantial could be incorporated in the same manner as 2.2. | ✅ Fully | Table 1 : Case detection scenarios evaluated, and 2.1 Case Detection Scenarios Johnson et al. (2021) |
2.4 Algorithms | Provide further detail on any algorithms in the model that (for example) mimic complex or manual processes in the real world (i.e. scheduling of arrivals/ appointments/ operations/ maintenance, operation of a conveyor system, machine breakdowns, etc.). Sufficient detail should be included (or referred to in other published work) for the algorithms to be reproducible. Pseudo-code may be used to describe an algorithm. | ✅ Fully | Provided in prior paper Sadatsafavi et al. (2019) - Table 1 : “Input Parameters (βs) and Their Related Equations in Evaluation Platform in COPD (EPIC)” - and elsewhere in text e.g. “new individuals (incident population) enter the model in future years according to the projection of population growth and aging”, and “Time of events was generally sampled from the exponential distribution whose event rates were functions of an individual’s variables.” |
2.5.1 Components - entities | Give details of all entities within the simulation including a description of their role in the model and a description of all their attributes. | ✅ Fully | Provided in prior paper Sadatsafavi et al. (2019) - Methods: Conceptual Framework - “An individual is defined by a set of “variables” (such as sex, current age, current FEV1, individual-specific rate of FEV1 decline, and rate of exacerbations)” |
2.5.2 Components - activities | Describe the activities that entities engage in within the model. Provide details of entity routing into and out of the activity. | ✅ Fully | Provided clearly in prior paper Sadatsafavi et al. (2019) in Appendix 1 which is a table listing the events in the model, what they are affected by, and the consequence of each event. These are likewise listed in Johnson et al. (2021) 2.2 Evaluation Platform in COPD (EPIC) - “COPD occurrence, exacerbation start and end, change in smoking status, and death due to background mortality are modelled as events in EPIC.” - along with additional events for this paper - “In order to simulate the case detection pathway, we added modules for symptoms, primary care visits, and COPD diagnosis to the original model.” |
2.5.3 Components - resources | List all the resources included within the model and which activities make use of them. | N/A | There doesn’t appear to be any resource use within the DES - it is not mentioned, and the listed events wouldn’t appear to require it. However, they do apply costs related to resource use - e.g. 2.4.2.1 Case Detection and Diagnosis Costs lists costs for administering case detection, screening spirometry, outpatient diagnosis, inpatient diagnosis in Johnson et al. (2021) - though looking at Table 1 in Sadatsafavi et al. (2019), costs are calculated based on COPD severity - hence it appears the resource use isn’t a “component” in the model. |
2.5.4 Components - queues | Give details of the assumed queuing discipline used in the model (e.g. First in First Out, Last in First Out, prioritisation, etc.). Where one or more queues have a different discipline from the rest, provide a list of queues, indicating the queuing discipline used for each. If reneging, balking or jockeying occur, etc., provide details of the rules. Detail any delays or capacity constraints on the queues. | N/A | Doesn’t appear to be any queues. |
2.5.5 Components - entry/exit points | Give details of the model boundaries i.e. all arrival and exit points of entities. Detail the arrival mechanism (e.g. ‘thinning’ to mimic a non-homogenous Poisson process or balking) | ✅ Fully | Entry/exit points illustrated in Figure 1 - “generate random patient representing canadians >= 40 years of age” - and then, at the end, if “mortality due to the exacerbation” then “remove the individual” - Johnson et al. (2021) - or, as in Sadatsafavi et al. (2019), ends either at death “or end of the simulation time horizon”. Regarding arrivals, patients were either in pre-existing population or “new individuals (incident population) enter the model in future years according to the projection of population growth and aging” |
Data | |||
3.1 Data sources | List and detail all data sources. Sources may include: • Interviews with stakeholders, • Samples of routinely collected data, • Prospectively collected samples for the purpose of the simulation study, • Public domain data published in either academic or organisational literature. Provide, where possible, the link and DOI to the data or reference to published literature. All data source descriptions should include details of the sample size, sample date ranges and use within the study. | ✅ Fully | Lots of data sources mentioned throughout Johnson et al. (2021) - e.g. “used data from the Canadian Cohort of Obstructive Lung Disease (CanCOLD) study [26] to develop risk equations for the annual occurrence of cough, phlegm, wheeze, dyspnea…”. Also, in the prior paper Sadatsafavi et al. (2019), Table 1 lists input parameters and has a column with the source/reference for each. |
3.2 Pre-processing | Provide details of any data manipulation that has taken place before its use in the simulation, e.g. interpolation to account for missing data or the removal of outliers. | N/A | Couldn’t spot any mention of this, and hence assuming there was none. |
3.3 Input parameters | List all input variables in the model. Provide a description of their use and include parameter values. For stochastic inputs provide details of any continuous, discrete or empirical distributions used along with all associated parameters. Give details of all time dependent parameters and correlation. Clearly state: • Base case data • Data use in experimentation, where different from the base case. • Where optimisation or design of experiments has been used, state the range of values that parameters can take. • Where theoretical distributions are used, state how these were selected and prioritised above other candidate distributions. | ✅ Fully | Johnson et al. (2021) has a section 2.4 Parameter Inputs which describes these in categories of 2.4.1 Treatment Effectiveness, 2.4.2 Costs, 2.4.3 Health Status. Sadatsafavi et al. (2019) has Table 1 which is a table of input parameters. |
3.4 Assumptions | Where data or knowledge of the real system is unavailable what assumptions are included in the model? This might include parameter values, distributions or routing logic within the model. | ✅ Fully | A few assumptions related to input parameters are mentioned in Johnson et al. (2021) - 2.4.2.1 Case Detection and Diagnosis Costs - “we assumed case detection comprised the major topic of a primary care visit” - and 2.4.3 Health Status - “we assumed a temporary false positive diagnosis of COPD was not associated with a disutility”. A few more are then also mentioned in Sadatsafavi et al. (2019) - e.g. “steady-state assumption: that COPD incidence is such that COPD prevalence remains stable over time within the strata of risk factors” |
Experimentation | |||
4.1 Initialisation | Report if the system modelled is terminating or non-terminating. State if a warm-up period has been used, its length and the analysis method used to select it. For terminating systems state the stopping condition. State what if any initial model conditions have been included, e.g., pre-loaded queues and activities. Report whether initialisation of these variables is deterministic or stochastic. | ✅ Fully | Although this terminology is not used, the description of the model provides this information - non-terminating with initial model conditions - e.g. original paper Sadatsafavi et al. (2019) Appendix 2 : “starts with a cross-section of the Canadian population 20 years or older in 2001… It reflects the population base of the 2001 Canadian Community Health Survey… Once the start-up population is established, individual’s risk factors are dynamically updated…” |
4.2 Run length | Detail the run length of the simulation model and time units. | ✅ Fully | Run length in Johnson et al. (2021) Table 2 - “Time horizon 20 years” - and the only mentioned time unit is “years” (with no mention of weeks, days, hours). |
4.3 Estimation approach | State the method used to account for the stochasticity: For example, two common methods are multiple replications or batch means. Where multiple replications have been used, state the number of replications and for batch means, indicate the batch length and whether the batch means procedure is standard, spaced or overlapping. For both procedures provide a justification for the methods used and the number of replications/size of batches. | N/A | In Johnson et al. (2021) they state “EPIC is not probabilistic, meaning we could not explore parameter uncertainty”… and in Sadatsafavi et al. (2019) “EPIC does not currently incorporate uncertainty in the underlying evidence, which is a goal for the next iteration of this platform. Given that many parameters of EPIC are derived based on calibration techniques, this is not simply a matter of replacing fixed input parameters with their probabilistic equivalent”. |
Implementation | |||
5.1 Software or programming language | State the operating system and version and build number. State the name, version and build number of commercial or open source DES software that the model is implemented in. State the name and version of general-purpose programming languages used (e.g. Python 3.5). Where frameworks and libraries have been used provide all details including version numbers. | 🟡 Partially | Johnson et al. (2021) Code availability - “can be run as an R package”. Sadatsafavi et al. (2019) Simulation Platform “EPIC was programmed in C++… an identical open-source and open-access model was developed with an interface in R (v3.5.1)” |
5.2 Random sampling | State the algorithm used to generate random samples in the software/programming language used e.g. Mersenne Twister. If common random numbers are used, state how seeds (or random number streams) are distributed among sampling processes. | N/A | In Johnson et al. (2021) they state “EPIC is not probabilistic, meaning we could not explore parameter uncertainty”… and in Sadatsafavi et al. (2019) “EPIC does not currently incorporate uncertainty in the underlying evidence, which is a goal for the next iteration of this platform. Given that many parameters of EPIC are derived based on calibration techniques, this is not simply a matter of replacing fixed input parameters with their probabilistic equivalent”. |
5.3 Model execution | State the event processing mechanism used e.g. three phase, event, activity, process interaction. Note that in some commercial software the event processing mechanism may not be published. In these cases authors should adhere to item 5.1 software recommendations. State all priority rules included if entities/activities compete for resources. If the model is parallel, distributed and/or use grid or cloud computing, etc., state and preferably reference the technology used. For parallel and distributed simulations the time management algorithms used. If the HLA is used then state the version of the standard, which run-time infrastructure (and version), and any supporting documents (FOMs, etc.) | ❌ Not met | Couldn’t spot this anywhere |
5.4 System specification | State the model run time and specification of hardware used. This is particularly important for large scale models that require substantial computing power. For parallel, distributed and/or use grid or cloud computing, etc. state the details of all systems used in the implementation (processors, network, etc.) | ❌ Not met | Couldn’t spot this anywhere |
Code access | |||
6.1 Computer model sharing statement | Describe how someone could obtain the model described in the paper, the simulation software and any other associated software (or hardware) needed to reproduce the results. Provide, where possible, the link and DOIs to these. | ✅ Fully | Code availability “EPIC is publicly available and can be run as an R package (http://resp.core.ubc.ca/software/epicR). Code for replicating these results is available at http://resp.core.ubc.ca/software/caseDetection.” |
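Item 2.4 notes that event times in EPIC were “generally sampled from the exponential distribution whose event rates were functions of an individual’s variables”. As a minimal sketch of that general competing-risks technique (not EPIC’s actual implementation — the event names and rate values below are invented for illustration), the next event for an individual can be drawn like this:

```python
import random

def sample_next_event(rates: dict[str, float]) -> tuple[str, float]:
    """Competing-risks sampling: draw an exponential waiting time for
    each candidate event at its individual-specific rate, and return
    whichever event fires first along with its time."""
    times = {event: random.expovariate(rate)
             for event, rate in rates.items() if rate > 0}
    event = min(times, key=times.get)
    return event, times[event]

# Hypothetical individual-specific rates (events per year) — purely
# illustrative values, not EPIC's parameters.
rates = {"exacerbation": 0.8, "smoking_cessation": 0.05, "background_death": 0.01}
event, t = sample_next_event(rates)
```

Because the minimum of independent exponential variables is itself exponential with the summed rate, this is equivalent to sampling one waiting time at the total rate and then picking the event type proportionally — both are standard ways to implement the mechanism the paper describes.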
DES checklist derived from ISPOR-SDM
Of the 18 items in the checklist:
- 15 were met fully (✅)
- 2 were not met (❌)
- 1 was not applicable (N/A)
Item | Assessed if… | Met by study? | Evidence/location |
---|---|---|---|
Model conceptualisation | |||
1 Is the focused health-related decision problem clarified? | …the decision problem under investigation was defined. DES studies included different types of decision problems, eg, those listed in previously developed taxonomies. | ✅ Fully | 1 Introduction - primary care-based case detection - e.g. “estimates from Canada indicate that 70% of patients with COPD are currently undiagnosed [5, 6], and up to one-third of COPD patients are initially diagnosed in hospital following an exacerbation” Johnson et al. (2021) |
2 Is the modeled healthcare setting/health condition clarified? | …the physical context/scope (eg, a certain healthcare unit or a broader system) or disease spectrum simulated was described. | ✅ Fully | 1 Introduction - aims to “project the outcomes of COPD-related policies for the general Canadian population”, who are described e.g. “estimates from Canada indicate that 70% of patients with COPD are currently undiagnosed [5, 6], and up to one-third of COPD patients are initially diagnosed in hospital following an exacerbation” Johnson et al. (2021) |
3 Is the model structure described? | …the model’s conceptual structure was described in the form of either graphical or text presentation. | ✅ Fully | Figure 1 , 2.1 Case Detection Scenarios , 2.2 Evaluation Platform in COPD (EPIC) , 2.3 Evaluation of Case Detection Scenarios Johnson et al. (2021) |
4 Is the time horizon given? | …the time period covered by the simulation was reported. | ✅ Fully | Run length in Johnson et al. (2021) Table 2 - “Time horizon 20 years”. |
5 Are all simulated strategies/scenarios specified? | …the comparators under test were described in terms of their components, corresponding variations, etc | ✅ Fully | Scenario based - have (a) 2.5.1 Reference Case - with “16 case detection scenarios” which can see in Table 1 - and then (b) 2.5.2 Sensitivity Analyses which explore outcomes of those scenarios with varying parameters e.g. medication adherence, smoking cessation.Johnson et al. (2021) |
6 Is the target population described? | …the entities simulated and their main attributes were characterized. | ✅ Fully | Johnson et al. (2021) 1 Introduction has some general information e.g. “estimates from Canada indicate that 70% of patients with COPD are currently undiagnosed [5, 6], and up to one-third of COPD patients are initially diagnosed in hospital following an exacerbation” - then 2.2 Evaluation Platform in COPD (EPIC) states that it is focused on “general population of Canadians aged >=40 years”, with description of population provided - e.g. Sadatsafavi et al. (2019) Appendix 2: Summary of the demographics and risk factor module |
Parameterisation and uncertainty assessment | |||
7 Are data sources informing parameter estimations provided? | …the sources of all data used to inform model inputs were reported. | ✅ Fully | Lots of data sources mentioned throughout Johnson et al. (2021) - e.g. “used data from the Canadian Cohort of Obstructive Lung Disease (CanCOLD) study [26] to develop risk equations for the annual occurrence of cough, phlegm, wheeze, dyspnea…”. Also, in the prior paper Sadatsafavi et al. (2019), Table 1 lists input parameters and has a column with the source/reference for each. |
8 Are the parameters used to populate model frameworks specified? | …all relevant parameters fed into model frameworks were disclosed. | ✅ Fully | Johnson et al. (2021) has a section 2.4 Parameter Inputs which describes these in categories of 2.4.1 Treatment Effectiveness , 2.4.2 Costs , 2.4.3 Health Status . Sadatsafavi et al. (2019) has Table 1 which is a table of input parameters. |
9 Are model uncertainties discussed? | …the uncertainty surrounding parameter estimations and adopted statistical methods (eg, 95% confidence intervals or possibility distributions) were reported. | N/A | Johnson et al. (2021) 4.4 Strengths and Limitations : “EPIC is not probabilistic, meaning we could not explore parameter uncertainty” |
10 Are sensitivity analyses performed and reported? | …the robustness of model outputs to input uncertainties was examined, for example via deterministic (based on parameters’ plausible ranges) or probabilistic (based on a priori-defined probability distributions) sensitivity analyses, or both. | ✅ Fully | As reported in 3.2 Sensitivity Analyses Johnson et al. (2021) |
Validation | |||
11 Is face validity evaluated and reported? | …it was reported that the model was subjected to the examination on how well model designs correspond to the reality and intuitions. It was assumed that this type of validation should be conducted by external evaluators with no stake in the study. | ✅ Fully | Prior paper Sadatsafavi et al. (2019) - Methods: Examining Face Validity and Internal Validity - “Two important sets of face validity targets were the gradient of outcomes across categories of risk factors (e.g., higher COPD prevalence among ever-smokers than never-smokers) and the stability of outcomes over time within risk factor strata. Face validity results are not reported here but are available upon request.” Although the results are not provided, I think this still meets criteria, as it was reported that it was subjected to examination. |
12 Is cross validation performed and reported | …comparison across similar modeling studies which deal with the same decision problem was undertaken. | ✅ Fully | Johnson et al. (2021) 4.2 Comparison with Previous Research |
13 Is external validation performed and reported? | …the modeler(s) examined how well the model’s results match the empirical data of an actual event modeled. | ✅ Fully | Prior paper Sadatsafavi et al. (2019) - Methods: Examining External Validity - “The external validity tests were based on the reported outcomes from the placebo arms of 2 independent clinical trials…” and Results: External validation . |
14 Is predictive validation performed or attempted? | …the modeler(s) examined the consistency of a model’s predictions of a future event and the actual outcomes in the future. If this was not undertaken, it was assessed whether the reasons were discussed. | ❌ Not met | Sadatsafavi et al. (2019) Abstract “Predictive validity of EPIC needs to be examined prospectively against future empirical studies.” |
Generalisability and stakeholder involvement | |||
15 Is the model generalizability issue discussed? | …the modeler(s) discussed the potential of the resulting model for being applicable to other settings/populations (single/multiple application). | ✅ Fully | Johnson et al. (2021) 4.4 Strengths and Limitations - “Given that the performance of case detection strategies is highly dependent on local settings and subsequent management decisions, the use of ‘whole disease’ platforms such as EPIC can result in more consistent evaluations. EPIC is open-source and code for reproducing this analysis is publicly available, meaning it can be easily updated to account for changes to ‘downstream’ technologies such as treatment.” Sadatsafavi et al. (2019) Discussion “While EPIC is developed for the Canadian context, many of its components pertain to the natural history and biology of COPD and are independent of a particular setting. Other components can be updated (mainly by changing numerical values of input parameters rather than modifying the structure of equations) with the specifics of a health care setting. As such, this platform can have applicability for other jurisdictions.” |
16 Are decision makers or other stakeholders involved in modeling? | …the modeler(s) reported in which part throughout the modeling process decision makers and other stakeholders (eg, subject experts) were engaged. | ❌ Not met | Couldn’t spot any mention of this. |
17 Is the source of funding stated? | …the sponsorship of the study was indicated. | ✅ Fully | Funding : “Financial support for this study was provided by the Canadian Lung Association Breathing as One Studentship Award…” Johnson et al. (2021) |
18 Are model limitations discussed? | …limitations of the assessed model, especially limitations of interest to decision makers, were discussed. | ✅ Fully | 4.4 Strengths and Limitations - “This study has several limitations. First, inhaled therapies are known to indirectly improve…” Johnson et al. (2021) |
References
Johnson, Kate M., Mohsen Sadatsafavi, Amin Adibi, Larry Lynd, Mark Harrison, Hamid Tavakoli, Don D. Sin, and Stirling Bryan. 2021. “Cost Effectiveness of Case Detection Strategies for the Early Detection of COPD.” Applied Health Economics and Health Policy 19 (2): 203–15. https://doi.org/10.1007/s40258-020-00616-2.
Monks, Thomas, Christine S. M. Currie, Bhakti Stephan Onggo, Stewart Robinson, Martin Kunc, and Simon J. E. Taylor. 2019. “Strengthening the Reporting of Empirical Simulation Studies: Introducing the STRESS Guidelines.” Journal of Simulation 13 (1): 55–67. https://doi.org/10.1080/17477778.2018.1442155.
Sadatsafavi, Mohsen, Shahzad Ghanbarian, Amin Adibi, Kate Johnson, J. Mark FitzGerald, William Flanagan, Stirling Bryan, and Don Sin. 2019. “Development and Validation of the Evaluation Platform in COPD (EPIC): A Population-Based Outcomes Model of COPD for Canada.” Medical Decision Making 39 (2): 152–67. https://doi.org/10.1177/0272989X18824098.
Zhang, Xiange, Stefan K. Lhachimi, and Wolf H. Rogowski. 2020. “Reporting Quality of Discrete Event Simulations in Healthcare—Results From a Generic Reporting Checklist.” Value in Health 23 (4): 506–14. https://doi.org/10.1016/j.jval.2020.01.005.