5  Reflections from reproductions

This page describes the facilitators for each reproduction (and, conversely, the barriers that arose in their absence). In each section, I have included a table evaluating whether each facilitator was fully met (✅), partially met (🟡), not met (❌) or not applicable (N/A) for each study.

Full citations and links for each study are given in the references at the end of this page (Section 5.10).

5.1 Environment

Facilitator: the packages required to run the code are listed.

Shoaib and Ramamohan (2021): ❌ · Huang et al. (2019): ❌ · Lim et al. (2020): 🟡 · Kim et al. (2021): ✅ · Anagnostou et al. (2022): ✅ · Johnson et al. (2021): -

Shoaib and Ramamohan (2021): Not met. This became a time-consuming issue: it took a while to identify a dependency needed for the code to work (greenlet, found by reading the salabim documentation), and a while longer to realise I had installed a third-party package when the package I needed (statistics) was in the standard library.

Huang et al. (2019): Not met. However, this was fairly easily resolved from the imports in the .R script, plus the extra packages that RStudio suggested when I tried and failed to run the script.

Lim et al. (2020): Partially met. The only packages needed (numpy and pandas) are mentioned in the paper (although only listed as imports in the script).

Kim et al. (2021): Fully met. Provides commands at the start of the scripts to install the required packages, which renv could then detect automatically when I built the environment.

Anagnostou et al. (2022): Fully met. Provides a requirements.txt file.

Johnson et al. (2021):

Reflections:

  • Import statements can be sufficient for indicating the packages required, but not when there are “hidden”/unmentioned dependencies that are never imported.
  • There are various options for listing the packages (e.g. comprehensive import statements, installation lines in the script, environment files); a minimal environment-file sketch is given below.
  • This was a common issue.
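
As an illustration of the environment-file option, a pinned requirements.txt might look like the sketch below (the package names and versions are hypothetical, not taken from any of these studies):

```
# requirements.txt (hypothetical example)
simpy==4.0.1
numpy==1.24.4
pandas==2.0.3
```
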
Facilitator: the versions of the programming language and packages are stated.

Shoaib and Ramamohan (2021): ❌ · Huang et al. (2019): ❌ · Lim et al. (2020): 🟡 · Kim et al. (2021): 🟡 · Anagnostou et al. (2022): 🟡 · Johnson et al. (2021): -

Shoaib and Ramamohan (2021): Not met. I had to backdate the package versions as the model didn’t work with the latest.

Huang et al. (2019): Not met. I initially tried to create an environment with R and package versions from before the article’s publication date. However, I had great difficulty implementing this in R, and never managed to do it successfully. This was related to:

  • The difficulty of switching between R versions
  • Problems in finding an available source for specific package versions for specific versions of R

Lim et al. (2020): Partially met. Provides the major Python version, but I chose the minor version and the package versions based on the article’s publication date.

Kim et al. (2021): Partially met. States the version of R but not of the packages. Due to prior issues with backdating R, I used the latest versions. There were no issues using the latest versions of R and the packages, but if there had been, it would have been important to know which versions had previously been used and worked.

Anagnostou et al. (2022): Partially met (depending on how strict you are being). The Python version was stated in the paper, and the simpy version was stated in the complementary app repository (although neither was mentioned in the model repository itself).

Johnson et al. (2021):

Reflections:

  • Models will sometimes work with the latest versions of packages, but equally, you will sometimes need to backdate because the model no longer works with the latest.
  • For Python, it was very easy to “backdate” the Python and package versions. However, I found this very difficult to do in R, and ended up always using the latest versions.
  • Versions are sometimes provided elsewhere (e.g. in the paper, or in other repositories), but it would be handy for them to be in the model repository itself; one way of capturing them is sketched below.
  • This was a very common issue.
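
As a hedged sketch of capturing versions for the repository, a few lines of Python can print the interpreter and package versions in requirements.txt style (the package list here is a hypothetical stand-in):

```python
# Minimal sketch: record the Python and package versions used for a run,
# so they can be stored in the model repository.
import sys
from importlib.metadata import version  # Python 3.8+

packages = ["simpy", "numpy", "pandas"]  # hypothetical dependency list

print(f"# python {sys.version.split()[0]}")
for pkg in packages:
    print(f"{pkg}=={version(pkg)}")
```
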

5.2 Structure of code and scripts

Facilitator: the model code is provided in a runnable format.

Shoaib and Ramamohan (2021): ✅ · Huang et al. (2019): ❌ · Lim et al. (2020): ✅ · Kim et al. (2021): ✅ · Anagnostou et al. (2022): ✅ · Johnson et al. (2021): -

Shoaib and Ramamohan (2021): Fully met. Provided as a single .py file which ran the model via a main() function.

Huang et al. (2019): Not met. The model code was provided within the code for a web application, but the paper was not focused on this application, and instead on specific model scenarios. I had to extract the model code and transform it into a format that was “runnable” as an R script/notebook.

Lim et al. (2020): Fully met. Provided as a single .py file which ran the model with a for loop.

Kim et al. (2021): Fully met. Has separate .R scripts for each scenario, which ran the model by calling functions from elsewhere in the repository.

Anagnostou et al. (2022): Fully met. The model can be run from the command line.

Johnson et al. (2021):

Reflections:

  • If you are presenting the results of a model, then provide the code for that model in a “runnable” format; a minimal sketch of what this can look like follows below.
  • This was an uncommon issue.
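
For illustration only, a minimal sketch of a “runnable” single-script format, loosely modelled on the main() pattern described above (all names are hypothetical):

```python
# Sketch of a runnable model script: one entry point runs the model
# end-to-end, with no manual steps required.
def run_model(run_length=365, n_replications=10):
    """Stand-in for the simulation; returns per-replication results."""
    return [{"replication": rep, "run_length": run_length}
            for rep in range(n_replications)]

def main():
    results = run_model()
    print(f"Completed {len(results)} replications")

if __name__ == "__main__":
    main()
```
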
Facilitator: the model is set up to run programmatically, with parameters as changeable inputs.

Shoaib and Ramamohan (2021): ❌ · Huang et al. (2019): ✅ · Lim et al. (2020): ✅ · Kim et al. (2021): ✅ · Anagnostou et al. (2022): ✅ · Johnson et al. (2021): -

Shoaib and Ramamohan (2021): Not met. The model is set up as classes and run using a function. However, it is not designed to allow any variation in inputs. Everything uses default inputs, and it is designed in such a way that, if you wish to vary model parameters, you need to change them directly in the script itself. This meant you couldn’t run multiple versions of the model using the same script. It also meant hidden errors were more likely (e.g. if you miss changing a parameter somewhere, or input the wrong parameters without realising).

Huang et al. (2019): Fully met. Model was set up as a function, with many of the required parameters already set as “changeable” inputs to that function.

Lim et al. (2020): Fully met. The model is created from a series of functions and run with a for loop that iterates through different parameters. As such, the model can be run programmatically (within that for loop, which varied parameters such as staff per shift and re-ran the model).

Kim et al. (2021): Fully met. Each scenario is an R script which states different parameters and then calls functions to run the model.

Anagnostou et al. (2022): Fully met. Inputs are changed via the input .csv files.

Johnson et al. (2021):

Reflections:

  • Design the model so that you can re-run it with different parameters without needing to change the model code itself (see the sketch after this list).
    • This allows you to run multiple versions of the model with the same script.
    • It also reduces the likelihood of hidden errors (e.g. if you miss changing an input parameter somewhere, or input the wrong parameters without realising).
  • This was an uncommon issue.
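
A minimal sketch of this design, with parameters as function arguments so one script can run several versions of the model (the parameter names are hypothetical, echoing the staff-per-shift example above):

```python
# Sketch: parameters enter as arguments, so the model can be re-run with
# different values without editing the model code itself.
def run_model(staff_per_shift, shift_length=8):
    """Stand-in for a simulation run; returns a results dict."""
    return {"staff_per_shift": staff_per_shift, "shift_length": shift_length}

# Run multiple versions of the model from the same script
all_results = [run_model(staff_per_shift=s) for s in [2, 4, 6]]
```
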
Facilitator: parameters are not hard coded within the model.

Shoaib and Ramamohan (2021): ❌ · Huang et al. (2019): 🟡 · Lim et al. (2020): ❌ · Kim et al. (2021): ✅ · Anagnostou et al. (2022): N/A · Johnson et al. (2021): -

Shoaib and Ramamohan (2021): Not met. Some parameters sat “outside” the model within the main() function, and hence were more “changeable” (even if not changeable inputs to that function; they still had to be changed directly in the script). However, many of the other parameters were hard coded within the model itself. It took time to spot where these were and correctly adjust them to be modifiable inputs.

Huang et al. (2019): Partially met. Pretty much all of the parameters we wanted to change were not hard coded, and were instead inputs to the model function simulate_nav(). However, I did need to add an exclusive_use scenario which conditionally changed ir_resources; that was the only exception. I also added ed_triage as a changeable input, but didn’t end up needing it to reproduce any results (it was just part of troubleshooting).

Lim et al. (2020): Not met. Some parameters were not hard coded within the model, but lots of them were.

Kim et al. (2021): Fully met. All model parameters could be varied from “outside” the model code itself, as they were provided as changeable inputs to the model.

Anagnostou et al. (2022): N/A as no scenarios.

Johnson et al. (2021):

Reflections:

  • It can be quite difficult to change parameters that are hard coded into the model. Ideally, all the parameters that a user might want to change should be easily changeable and not hard coded; one possible pattern is sketched below.
  • This was a somewhat common issue.
  • There is overlap between this and whether the code for scenarios is provided (typically, scenario code conditionally changes parameter values, which is made easier by not hard coding the parameters, so you only need to change values from “outside” the model code rather than editing the model functions themselves). Hence, I have included these as two separate reflections.
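
One possible pattern (a sketch, not any study’s actual design) is to gather every user-changeable value into a single parameter object, so nothing needs editing inside the model logic:

```python
# Sketch: all user-changeable values live in one place, none hard coded
# inside the model logic (the names are hypothetical).
from dataclasses import dataclass

@dataclass
class Params:
    n_doctors: int = 2
    consult_mean: float = 5.0  # mean consultation time (minutes)
    warm_up_days: int = 100

def run_model(p: Params):
    """Model code reads everything from p rather than from constants."""
    ...

run_model(Params(n_doctors=3))  # vary a parameter without touching the model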
Facilitator: code duplication is avoided.

Shoaib and Ramamohan (2021): ❌ · Huang et al. (2019): ✅ · Lim et al. (2020): ✅ · Kim et al. (2021): ❌ · Anagnostou et al. (2022): ✅ · Johnson et al. (2021): -

Shoaib and Ramamohan (2021): Not met. The model often contained very similar blocks of code for before and after the warm-up period. This has the potential to introduce mistakes; a suspected (although unconfirmed) example is that the lower boundary for the doctor consultation times in configuration 1 differed before and after warm-up.

Huang et al. (2019): Fully met.

Lim et al. (2020): Fully met.

Kim et al. (2021): Not met. There was a lot of duplication when running each scenario (e.g. repeated calls to Eventsandcosts, and repeatedly defining the same parameters). This meant that, to change a parameter which should be consistent across all the scripts (e.g. number of persons), you had to edit each script one by one.

Anagnostou et al. (2022): Fully met.

Johnson et al. (2021):

Reflections: Large amounts of code duplication are best avoided (one way to avoid it is sketched after this list), as they can:

  • Make code less readable
  • Make it trickier to change universal parameters
  • Increase the likelihood of introducing mistakes
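
A hedged sketch of one way to avoid such duplication: define shared defaults once, and express each scenario as a small set of overrides (the scenario names and parameters are hypothetical):

```python
# Sketch: shared defaults defined once, scenarios as overrides, so a
# universal parameter (e.g. number of persons) is changed in one place.
BASE = {"n_persons": 100_000, "screening_age": 65}  # hypothetical defaults

SCENARIOS = {
    "baseline": {},
    "reduced_screening": {"screening_age": 67},
}

def run_model(**params):
    """Stand-in for the model; accepts parameters as keyword arguments."""
    return params

results = {name: run_model(**{**BASE, **overrides})
           for name, overrides in SCENARIOS.items()}
```
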
Facilitator: the code is sufficiently commented.

Shoaib and Ramamohan (2021): ❌ · Huang et al. (2019): ❌ · Lim et al. (2020): ✅ · Kim et al. (2021): 🟡 · Anagnostou et al. (2022): 🟡 · Johnson et al. (2021): -

Shoaib and Ramamohan (2021) and Huang et al. (2019): Not met. Both would have benefitted from more comments, as it took some time to ensure I had correctly understood the code, particularly where it used lots of abbreviations.

Lim et al. (2020): Fully met. There were lots of comments in the code (including doc-string-style comments at the start of functions) that aided understanding of how it worked.

Kim et al. (2021): Partially met. Didn’t have any particular issues in working out the code. There are sufficient comments in the scenario scripts and at the start of the model scripts, although within the model scripts, there were sometimes quite dense sections of code that would likely benefit from some additional comments.

Anagnostou et al. (2022): Partially met. Didn’t have to delve into the code much, so can’t speak from experience as to whether the comments were sufficient. From looking through the model code, several scripts have lots of comments and docstrings for each function, but some do not.

Johnson et al. (2021):

Reflections:

  • As code complexity increases, the inclusion of sufficient comments becomes increasingly important, as it can otherwise be quite time consuming to figure out how to fix and change sections of code.
  • Define any abbreviations used within the code; the sketch below shows one way of doing this in a docstring.
  • It is good to have consistent comments and docstrings throughout (i.e. on all scripts, not just some of them).
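
For illustration, a sketch of the docstring style that helped here, including defining abbreviations (the function and abbreviations are hypothetical):

```python
def mean_los(arrivals, departures):
    """Calculate the mean length of stay (LOS).

    Abbreviations: LOS = length of stay; ED = emergency department.

    Parameters:
        arrivals: ED arrival times (minutes since simulation start).
        departures: ED departure times, in the same order as arrivals.
    """
    stays = [d - a for a, d in zip(arrivals, departures)]
    return sum(stays) / len(stays)
```
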

5.3 Run time and memory usage

I have not evaluated this as a criterion, as a long run time is not inherently a bad thing. However, I definitely found that run time had a big impact on how easy it was to reproduce results: longer run times meant models were tricky (or even impossible) to run in the first place, or tricky to re-run.

The studies where I made adjustments were:

  • Shoaib and Ramamohan (2021): Added parallel processing and ran fewer replications.
  • Huang et al. (2019): No changes made.
  • Lim et al. (2020): Added parallel processing.
  • Kim et al. (2021): Reduced the number of people in the simulation, and switched from serial to the provided parallel option.
  • Anagnostou et al. (2022): No changes needed; the model was super quick, which made it really easy to run and re-run each time.
  • Johnson et al. (2021):

For Kim et al. (2021), an error appears to have been introduced with the aorta diameter thresholds when switching between nested and unnested lists, which I suspect went unresolved because the long run times meant the scripts weren’t all re-run in sequence at the end.

Reflections:

  • Reduce model run time if possible, as it makes the model easier to work with and facilitates full re-runs of all scenarios (which can be important after code changes, etc.). A sketch of parallelising replications, similar in spirit to the changes I made, is given below.
    • Relatedly, it is good practice to re-run all scripts before finishing up, as you can then spot any errors like the one mentioned for Kim et al. (2021).
  • This was a common issue (to varying degrees, from around 20 minutes up to several hours).
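
As a hedged sketch (not the studies’ actual code), independent replications can be spread across processor cores with Python’s standard library:

```python
# Sketch: run independent replications in parallel across CPU cores.
from multiprocessing import Pool

def run_replication(rep):
    """Stand-in for a single model replication; returns its results."""
    return {"replication": rep}

if __name__ == "__main__":
    with Pool() as pool:  # uses all available cores by default
        results = pool.map(run_replication, range(100))
```
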
Facilitator: the expected run time is stated.

Shoaib and Ramamohan (2021): 🟡 · Huang et al. (2019): ❌ · Lim et al. (2020): ❌ · Kim et al. (2021): ❌ · Anagnostou et al. (2022): N/A · Johnson et al. (2021): -

Shoaib and Ramamohan (2021): Partially met. Run time stated in paper but not repository.

Huang et al. (2019): Not met.

Lim et al. (2020): Not met.

Kim et al. (2021): Not met. A prior paper describing the model development mentions the run time, but not the current paper or repository.

Anagnostou et al. (2022): Not applicable. Very quick (seconds), so not particularly relevant. You could argue a statement is still potentially useful, in case an error made it look like the model was running continuously (e.g. stuck in a loop), although this is relatively unlikely.

Johnson et al. (2021):

Reflections:

  • For long-running models with no stated run time, it can take a while to realise that there is no error in the code; it is actually just a long run time! It is also hard to know how long to expect, and whether the run is within the capacities of your machine. A trivial way to capture the run time for documentation is sketched below.
  • This was a common issue.
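
A trivial sketch of timing the full run, so the figure can be stated in the documentation (main() is a hypothetical entry point):

```python
# Sketch: time the full run and report it, so it can be documented.
import time

def main():
    """Stand-in for the full model run."""
    ...

start = time.perf_counter()
main()
print(f"Run time: {(time.perf_counter() - start) / 60:.2f} minutes")
```
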
Facilitator: the computational requirements (e.g. memory) of the model are stated.

Shoaib and Ramamohan (2021): N/A · Huang et al. (2019): N/A · Lim et al. (2020): N/A · Kim et al. (2021): ❌ · Anagnostou et al. (2022): N/A · Johnson et al. (2021): -

Shoaib and Ramamohan (2021), Huang et al. (2019), and Lim et al. (2020): Not applicable. Didn’t find it to be too computationally expensive for my machine.

Kim et al. (2021): Not met. I was unable to run the model as provided on my machine: the serial version took too long (it would have meant leaving the laptop on for many, many hours, which wasn’t feasible), and the parallel version was too computationally expensive and crashed the machine (with the original number of people). This is not mentioned in the repository or paper, only in a prior publication. It would have been handy to include suggestions such as reducing the number of people (which is what I had to do to feasibly run it).

Anagnostou et al. (2022): Not applicable. Runs in seconds.

Reflections:

  • Some models are so computationally expensive that it may simply be impossible to run them in a feasible length of time without a high-powered machine.
  • If a model is computationally expensive, it would be good to provide suggested alternatives that allow it to be run on lower-spec machines.
  • This was not a common problem; it is only relevant to computationally expensive models.

5.4 Parameters, scenarios and outputs

Facilitator: code is provided for the scenarios in the paper.

Shoaib and Ramamohan (2021): ❌ · Huang et al. (2019): ❌ · Lim et al. (2020): ❌ · Kim et al. (2021): ❌ · Anagnostou et al. (2022): N/A · Johnson et al. (2021): -

Shoaib and Ramamohan (2021): Not met. There were several instances where it took quite a while to understand how and where to modify the code in order to run scenarios (e.g. no arrivals, transferring admin work, reducing doctor intervention in deliveries).

Huang et al. (2019): Not met. I set up a notebook to programmatically run the model scenarios. It took a lot of work to modify and write code that could run the scenarios, and I often made mistakes in my interpretation of how to implement them, which could have been avoided if code for those scenarios had been provided.

Lim et al. (2020): Not met. Several parameters or scenarios were not incorporated in the code and had to be added (e.g. adding conditional logic to skip or change sections of code, removing hard coding, adding parameters to existing functions).

Kim et al. (2021): Not met. It took a lot of work to change the model from a for loop to a function, set all parameters as inputs (some were hard coded), and add conditional logic for scenarios where required.

Anagnostou et al. (2022): Not applicable. No scenarios.

Johnson et al. (2021):

Reflections:

  • This was a common issue.
  • It was time consuming and tricky to resolve.
Facilitator: the model calculates all the outputs required.

Shoaib and Ramamohan (2021): ❌ · Huang et al. (2019): ❌ · Lim et al. (2020): ❌ · Kim et al. (2021): ❌ · Anagnostou et al. (2022): ✅ · Johnson et al. (2021): -

Shoaib and Ramamohan (2021): Not met. I had to add some outputs and calculations (e.g. proportion of childbirth cases referred, standard deviation).

Huang et al. (2019): Not met. It has a complicated output (standardised density of patients in queue) that I was never certain I had correctly calculated. Although the model outputs the columns required to calculate it, due to its complexity I felt this was not met, as it is a whole new output in its own right (and not just something simple like a mean).

Lim et al. (2020): Not met. The model script provided was only set up to provide results from days 7, 14 and 21. The figures require daily results, so I needed to modify the code to output that.

Kim et al. (2021): Not met. Had to write code to find aorta sizes of people with AAA-related deaths.

Anagnostou et al. (2022): Fully met. Although it is worth noting that this study only had one scenario/version of the model and one output to reproduce.

Johnson et al. (2021):

Reflections:

  • Calculate and provide all the outputs required.
  • I appreciate this can be a bit “ambiguous” (e.g. I did not count simple cases here, such as just plotting a mean). However, combined with the other criteria, we do want authors to provide code to calculate outputs, so they would need to provide this anyway.
Facilitator: the provided code matches the baseline model in the paper.

Shoaib and Ramamohan (2021): 🟡 · Huang et al. (2019): ❌ · Lim et al. (2020): 🟡 · Kim et al. (2021): ✅ · Anagnostou et al. (2022): ✅ · Johnson et al. (2021): -

Shoaib and Ramamohan (2021): Partially met. The script is set with the parameters for base configuration 1, with the exception of the number of replications.

Huang et al. (2019): Not met. The baseline model in the script did not match the baseline model (or any scenario) in the paper, so I had to modify parameters.

Lim et al. (2020): Partially met. The included parameters were correct, but the baseline scenario involved varying staff strength to 2, and the provided code only varied it to 4 and 6. I had to add some code to enable runs with staff strength 2 (as an error occurred if you tried to set that).

Kim et al. (2021): Fully met.

Anagnostou et al. (2022): Fully met.

Johnson et al. (2021):

Reflections:

  • At the least, provide a script that can run the baseline model as in the paper (even if not providing the scenarios).
  • Incorrect baseline parameters introduce difficulties: when some parameters are wrong, you rely on the paper to check which are correct, but if the paper doesn’t mention every single parameter (which is reasonably likely, as this includes those not varied by scenarios), then you cannot be sure that the model you are running is correct.
Facilitator: all required parameter values are provided.

Shoaib and Ramamohan (2021): ❌ · Huang et al. (2019): ❌ · Lim et al. (2020): ❌ · Kim et al. (2021): ✅ · Anagnostou et al. (2022): ✅ · Johnson et al. (2021): -

Shoaib and Ramamohan (2021): Not met. Some parameters that could not be calculated were not provided - i.e. what consultation boundaries to use when the mean length of doctor consultation was 2.5 minutes.

Huang et al. (2019): Not met. In this case, patient arrivals and resource numbers were listed in the paper, and there were several discrepancies between these and the provided code. However, many of the model parameters (like length of appointment) were not mentioned in the paper, so it was not possible to confirm whether they were correct. Hence, this is marked as not met, as the discrepancies in several other parameters put these into doubt.

Lim et al. (2020): Not met. For Figure 5, I had to guess the value for staff_per_shift.

Kim et al. (2021): Fully met.

Anagnostou et al. (2022): Fully met.

Johnson et al. (2021):

Reflections:

  • Provide all required parameters
Facilitator: parameters are presented in a table in the paper.

Shoaib and Ramamohan (2021): ❌ · Huang et al. (2019): ❌ · Lim et al. (2020): 🟡 · Kim et al. (2021): N/A · Anagnostou et al. (2022): N/A · Johnson et al. (2021): -

Shoaib and Ramamohan (2021): Not met. Although there was a scenario table, this did not include all the parameters I would need to change. It was more challenging to identify parameters that were only described in the body of the article. There were also some discrepancies in parameters between the main text of the article, and the tables and figures. Some scenarios were quite ambiguous/unclear from their description in the text, and I initially misunderstood the required parameters for the scenarios.

Huang et al. (2019): Not met. As described above, the paper didn’t adequately describe all parameters.

Lim et al. (2020): Partially met. Nearly all parameters are in the paper’s table, and others are described in the article. However, it didn’t provide the staff_per_shift for Figure 5.

Kim et al. (2021) and Anagnostou et al. (2022): Not applicable. All provided.

Johnson et al. (2021):

Reflections:

  • Provide parameters in a table (including for each scenario), as it can be difficult/ambiguous to interpret them from the text, and hard to spot them too.
  • Be sure to mention every parameter that gets changed (e.g. for Lim et al. (2020), there wasn’t a default staff_per_shift across all scenarios, and as it wasn’t stated for the scenario, I had to guess it).
Facilitator: calculations for any pre-processed parameter values are provided.

Shoaib and Ramamohan (2021): ❌ · Huang et al. (2019): ✅ · Lim et al. (2020): N/A · Kim et al. (2021): N/A · Anagnostou et al. (2022): N/A · Johnson et al. (2021): -

Shoaib and Ramamohan (2021): Not met. It was unclear how to estimate inter-arrival time.

Huang et al. (2019): Fully met. The calculations for inter-arrival times were provided in the code, and the inputs to those calculations were the numbers of arrivals as reported in the paper, making it easy to compare the parameters and check whether the numbers were correct.

Lim et al. (2020): Not applicable. The parameter not provided is not one that you would calculate.

Kim et al. (2021) and Anagnostou et al. (2022): Not applicable. All provided.

Johnson et al. (2021):

Reflections:

  • If you are going to mention the “pre-processed” values at all, then it’s important to include the calculation (ideally in the code, as that is the clearest demonstration of exactly what you did); a trivial example is sketched below.
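
For instance, a sketch of deriving a mean inter-arrival time in code from a reported arrival count (the numbers are made up for illustration):

```python
# Sketch: derive the inter-arrival time from the reported arrival count,
# so the calculation is visible rather than only the processed value.
annual_arrivals = 2_000  # hypothetical figure reported in a paper
minutes_per_year = 365 * 24 * 60
mean_iat = minutes_per_year / annual_arrivals
print(f"Mean inter-arrival time: {mean_iat:.1f} minutes")
```
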

5.5 Output format

Facilitator: model results are saved to a file.

Shoaib and Ramamohan (2021): ✅ · Huang et al. (2019): ❌ · Lim et al. (2020): ❌ · Kim et al. (2021): ❌ · Anagnostou et al. (2022): ✅ · Johnson et al. (2021): -

Shoaib and Ramamohan (2021): Fully met. Outputs to .xlsx files.

Huang et al. (2019), Lim et al. (2020), and Kim et al. (2021): Not met. Outputs to dataframe/s.

Anagnostou et al. (2022): Fully met. Outputs to OUT_STATS.csv. Note: although not needed for the reproduction itself, when I tried to amend the name and location of the csv file output by the model (for use in tests), this was very tricky, as the path was hard coded into the scripts and difficult to amend due to how the model is run and set up.

Johnson et al. (2021):

Reflections:

  • This was a common issue.
  • Saving results to file is particularly important if the model run time is even slightly long (just minutes, and even more so as it stretches to many minutes or hours), so that you don’t have to re-run the model each time you need the results.
  • Set this up in such a way that it is easy to change the name and location of the output file, as sketched below.
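
A minimal sketch of that last point, with the output location as an argument rather than hard coded (pandas is used for illustration; all names are hypothetical):

```python
# Sketch: results saved to file, with a changeable name and location.
from pathlib import Path
import pandas as pd

def save_results(results, path="outputs/results.csv"):
    """Save model results, letting the caller choose the output file."""
    Path(path).parent.mkdir(parents=True, exist_ok=True)
    pd.DataFrame(results).to_csv(path, index=False)

save_results([{"scenario": "baseline", "mean_wait": 4.2}],
             path="outputs/test_run.csv")
```
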
Facilitator: the model outputs are easy to interpret.

Shoaib and Ramamohan (2021): ❌ · Huang et al. (2019): ✅ · Lim et al. (2020): ✅ · Kim et al. (2021): ❌ · Anagnostou et al. (2022): ✅ · Johnson et al. (2021): -

Shoaib and Ramamohan (2021): Not met. There were two alternative results spreadsheets with some duplicate metrics but sometimes differing results between them, which made it a bit confusing to work out what to use.

Huang et al. (2019), Lim et al. (2020), and Anagnostou et al. (2022): Fully met. Didn’t experience issues interpreting the contents of the output table/s.

Kim et al. (2021): Not met. It took me a little while to work out which surgery columns I needed, and to realise I needed to combine two of them. This required looking at which inputs generated them, and referring to an input data dictionary.

Johnson et al. (2021):

Reflections:

  • Don’t provide alternative results for the same metrics.
  • Make it clear what each column/category in the results table means, if it might not be immediately obvious.

5.6 Seeds

Facilitator: seeds are used so results can be reproduced exactly.

Shoaib and Ramamohan (2021): ❌ · Huang et al. (2019): ❌ · Lim et al. (2020): ❌ · Kim et al. (2021): ✅ · Anagnostou et al. (2022): ✅ · Johnson et al. (2021): -

Shoaib and Ramamohan (2021): Not met. The lack of seeds wasn’t actually a barrier to the reproduction, though, due to the number of replications. I later added seeds so my results could be reproduced, and found that the ease of setting seeds with salabim was a great facilitator to the work: I only had to change one or two lines of code to get consistent results between runs (unlike other simulation software such as SimPy, where you have to consider the use of seeds by different sampling functions). Moreover, by default, salabim would have set a seed (although this was overridden by the original authors to enable them to run replications).

Huang et al. (2019): Not met. It would have been beneficial to include seeds, as there was a fair amount of variability; with seeds, I could have been sure that my results did not differ from the original simply due to randomness.

Lim et al. (2020): Not met. The results obtained looked very similar to the original article, with minimal differences that I felt to be within the expected variation from the model stochasticity. However, if seeds had been present, we would have been able to say with certainty. I did not feel I needed to add seeds during the reproduction to get the same results.

Kim et al. (2021): Fully met. Included a seed, although I didn’t get identical results as I had to reduce the number of people in the simulation.

Anagnostou et al. (2022): Fully met. The authors included a random seed, so the results I got were identical to the original (no subjectivity was needed in deciding whether results were similar enough, as I could reproduce them perfectly).

Johnson et al. (2021):

Reflections:

  • Depending on your model and the outputs you are looking at, the lack of seeds can have varying impacts on the appearance of your results, and can make the subjective judgement of whether results are consistent harder (as discrepancies may or may not be attributable to inconsistent seeds).
  • It can be really quite simple to include seeds; one approach is sketched below.
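
As a hedged sketch for SimPy-style models (where, unlike salabim, there is no single built-in seed), one approach is to derive a separate generator for each replication from a base seed:

```python
# Sketch: reproducible sampling, with one seeded generator per replication.
import numpy as np

def run_replication(rep, base_seed=42):
    """Results are identical between runs, but vary across replications."""
    rng = np.random.default_rng(base_seed + rep)
    inter_arrival = rng.exponential(scale=5.0)  # hypothetical sampling
    return inter_arrival

results = [run_replication(rep) for rep in range(10)]
```
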

5.7 Code to produce article results

Facilitator: code is provided to produce the tables in the article.

Shoaib and Ramamohan (2021): ❌ · Huang et al. (2019): N/A · Lim et al. (2020): 🟡 · Kim et al. (2021): ❌ · Anagnostou et al. (2022): N/A · Johnson et al. (2021): -

Shoaib and Ramamohan (2021): Not met.

Huang et al. (2019): Not applicable. No tables in scope.

Lim et al. (2020): Partially met. It outputs the results in a similar structure to the paper (like a section of a table). However, it doesn’t have the full code to produce any of the tables outright, so additional processing was still required.

Kim et al. (2021): Not met. I had to write code to generate the tables, which included correctly implementing the calculation of excess outcomes (e.g. deaths), scaling to population size, and identifying which columns provide the operation outcomes.

Anagnostou et al. (2022): Not applicable. No tables in scope.

Johnson et al. (2021):

Reflections:

  • It can take a bit of time to do this processing, so it is very handy for it to be provided; a sketch of the kind of code involved is given below.
  • This was a common issue.
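
For illustration (the column and file names are hypothetical, not from the studies), code like this could accompany the model to rebuild a results table from the raw output:

```python
# Sketch: turn raw per-run output into a summary table like the article's.
import pandas as pd

raw = pd.read_csv("model_output.csv")  # hypothetical raw results file
table = (raw.groupby("scenario")["wait_time"]
            .agg(["mean", "std"])
            .round(2))
table.to_csv("table_2.csv")  # hypothetical article table
```
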
Facilitator: code is provided to produce the figures in the article.

Shoaib and Ramamohan (2021): ❌ · Huang et al. (2019): ❌ · Lim et al. (2020): ❌ · Kim et al. (2021): ❌ · Anagnostou et al. (2022): ❌ · Johnson et al. (2021): -

Shoaib and Ramamohan (2021): Not met.

Huang et al. (2019): Not met. I had to write code from scratch. For one of the figures, it would have been handy to be informed that the plot was produced by a simmer function (as I didn’t initially realise this). It also took a bit of time to work out how to transform the figure axes, as this was not mentioned in the paper (and no code was provided). It was also unclear and a bit tricky to work out how to standardise the density in the figures, since this is only described in the text, and no formula/calculations are provided there or in the code.

Lim et al. (2020), Kim et al. (2021) and Anagnostou et al. (2022): Not met. However, the simplicity and repetition of the figures was handy.

Johnson et al. (2021):

Reflections:

  • It can take a bit of time to do this processing, particularly if the figure involves any transformations (and less so if the figure is simple), so it is very handy for the code to be provided.
  • This was a common issue.

By “in-text results”, I am referring to results that are mentioned in the text but not included in (and that cannot be deduced from) any of the tables or figures.

Facilitator: code is provided to produce the in-text results.

Shoaib and Ramamohan (2021): ❌ · Huang et al. (2019): ❌ · Lim et al. (2020): N/A · Kim et al. (2021): ❌ · Anagnostou et al. (2022): N/A · Johnson et al. (2021): -

Shoaib and Ramamohan (2021), Huang et al. (2019), Kim et al. (2021): Not met.

Lim et al. (2020), Anagnostou et al. (2022): Not applicable (no in-text results).

Johnson et al. (2021):

Reflections:

  • Provide code to calculate in-text results.
  • This was a common issue.

5.8 Documentation

Facilitator: instructions for running the model are provided.

Shoaib and Ramamohan (2021): ❌ · Huang et al. (2019): ❌ · Lim et al. (2020): ❌ · Kim et al. (2021): 🟡 · Anagnostou et al. (2022): ✅ · Johnson et al. (2021): -

Shoaib and Ramamohan (2021): Not met. No instructions, although it is just a single script that you run.

Huang et al. (2019): Not met. The model is not provided in runnable form but, regardless, there are no instructions for running it as it is provided (i.e. as a web application; there is no information on how to get that running).

Lim et al. (2020): Not met. No instructions, although it is just a single script that you run.

Kim et al. (2021): Partially met. The README tells you which folder has the scripts you need, although nothing further (though all you need to do is run them).

Anagnostou et al. (2022): Fully met. Clear README with instructions on how to run the model was really helpful.

Reflections:

  • Even if running the model is as simple as running a script, include instructions on how to do so.
  • This was a common issue.

5.9 Other

Include tick marks/grid lines on figures, so it is easier to read across and judge whether a result is above or below a certain Y value.

Anagnostou et al. (2022): Included a data dictionary for the input parameters. Although I didn’t need this, it would have been great if I had needed to change the input parameters at all.

Reflections:

  • Include grid lines
  • Include data dictionaries

5.10 References

Anagnostou, Anastasia, Derek Groen, Simon J. E. Taylor, Diana Suleimenova, Nura Abubakar, Arindam Saha, Kate Mintram, et al. 2022. “FACS-CHARM: A Hybrid Agent-Based and Discrete-Event Simulation Approach for Covid-19 Management at Regional Level.” In 2022 Winter Simulation Conference (WSC), 1223–34. https://doi.org/10.1109/WSC57314.2022.10015462.
Huang, Shiwei, Julian Maingard, Hong Kuan Kok, Christen D. Barras, Vincent Thijs, Ronil V. Chandra, Duncan Mark Brooks, and Hamed Asadi. 2019. “Optimizing Resources for Endovascular Clot Retrieval for Acute Ischemic Stroke, a Discrete Event Simulation.” Frontiers in Neurology 10 (June). https://doi.org/10.3389/fneur.2019.00653.
Johnson, Kate M., Mohsen Sadatsafavi, Amin Adibi, Larry Lynd, Mark Harrison, Hamid Tavakoli, Don D. Sin, and Stirling Bryan. 2021. “Cost Effectiveness of Case Detection Strategies for the Early Detection of COPD.” Applied Health Economics and Health Policy 19 (2): 203–15. https://doi.org/10.1007/s40258-020-00616-2.
Kim, Lois G., Michael J. Sweeting, Morag Armer, Jo Jacomelli, Akhtar Nasim, and Seamus C. Harrison. 2021. “Modelling the Impact of Changes to Abdominal Aortic Aneurysm Screening and Treatment Services in England During the COVID-19 Pandemic.” PLOS ONE 16 (6): e0253327. https://doi.org/10.1371/journal.pone.0253327.
Lim, Chun Yee, Mary Kathryn Bohn, Giuseppe Lippi, Maurizio Ferrari, Tze Ping Loh, Kwok-Yung Yuen, Khosrow Adeli, and Andrea Rita Horvath. 2020. “Staff Rostering, Split Team Arrangement, Social Distancing (Physical Distancing) and Use of Personal Protective Equipment to Minimize Risk of Workplace Transmission During the COVID-19 Pandemic: A Simulation Study.” Clinical Biochemistry 86 (December): 15–22. https://doi.org/10.1016/j.clinbiochem.2020.09.003.
Shoaib, Mohd, and Varun Ramamohan. 2021. “Simulation Modelling and Analysis of Primary Health Centre Operations.” arXiv, June. https://doi.org/10.48550/arXiv.2104.12492.