6  Evaluation of the repository

The code and related research artefacts in the original code repositories were evaluated against the criteria of several journal badges and the STARS framework.

There was often a lot of overlap in criteria between the journal badges, so a list of unique criteria was produced. The repositories were evaluated against these criteria and then, depending on which criteria they met, against the badges themselves.

Caveat: Please note that these criteria are based on the information about each badge available online, and that our procedure likely differs from that of the journals (e.g. we allowed troubleshooting during execution and reproduction, and were not under tight time pressure). Moreover, we focus only on reproduction of the discrete-event simulation, and not on other aspects of each article. We cannot guarantee that the badges below would have been awarded in practice by these journals.

6.1 Summary

Unique badge criteria:

Badges:

Essential components of STARS framework:

Optional components of STARS framework:

6.2 Journal badges

Key:

In this section and below, the criteria for each study are marked as either fully met (✅), partially met (🟡), not met (❌) or not applicable (N/A).

Unique criteria:

Item S H L K A J
Criteria related to how artefacts are shared
• Stored in a permanent archive that is publicly and openly accessible
• Has a persistent identifier
• Includes an open license
Criteria related to what artefacts are shared
• Artefacts are relevant to and contribute to the article’s results
• Complete set of materials shared (as would be needed to fully reproduce article)
Criteria related to the structure and documentation of the artefacts
• Artefacts are well structured/organised (e.g. to the extent that reuse and repurposing is facilitated, adhering to norms and standards of research community)
• Artefacts are sufficiently documented (i.e. to understand how it works, to enable it to be run, including package versions)
• Artefacts are carefully documented (more than sufficient - i.e. to the extent that reuse and repurposing is facilitated - e.g. changing parameters, reusing for own purpose)
• Artefacts are clearly documented and accompanied by a README file with step-by-step instructions on how to reproduce results in the manuscript
Criteria related to running and reproducing results
• Scripts can be successfully executed
• Independent party regenerated results using the authors’ research artefacts
• Reproduced within approximately one hour (excluding compute time)
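The documentation criteria above (sufficient instructions, a README with step-by-step reproduction steps) can be made concrete with a skeleton README. The file names and commands below are hypothetical, not taken from any of the evaluated repositories:

```markdown
# Hypothetical DES model - README skeleton

## What the model does
One-paragraph plain-language summary of the simulation.

## Installation
    conda env create -f environment.yml

## Reproducing the results in the manuscript
    python scripts/run_model.py

## Varying parameters to run new experiments
Edit `config/parameters.yml` and re-run the script above.

## License and citation
Open license stated here (e.g. MIT); citation instructions in CITATION.cff.
```

A README along these lines would address both the "sufficiently documented" and "clearly documented with step-by-step instructions" criteria.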

 

Badges:

The badges are grouped into three categories:

  • “Open objects” badges: These badges relate to research artefacts being made openly available.
  • “Object review” badges: These badges relate to the research artefacts being reviewed against criteria of the badge issuer.
  • “Reproduced” badges: These badges relate to an independent party regenerating the results of the article using the authors’ objects.
Item S H L K A J
“Open objects” badges
NISO “Open Research Objects (ORO)”
• Stored in a permanent archive that is publicly and openly accessible
• Has a persistent identifier
• Includes an open license
NISO “Open Research Objects - All (ORO-A)”
• Stored in a permanent archive that is publicly and openly accessible
• Has a persistent identifier
• Includes an open license
• Complete set of materials shared (as would be needed to fully reproduce article)
ACM “Artifacts Available”
• Stored in a permanent archive that is publicly and openly accessible
• Has a persistent identifier
COS “Open Code”
• Stored in a permanent archive that is publicly and openly accessible
• Has a persistent identifier
• Includes an open license
• Complete set of materials shared (as would be needed to fully reproduce article)
• Artefacts are sufficiently documented (i.e. to understand how it works, to enable it to be run, including package versions)
IEEE “Code Available”
• Complete set of materials shared (as would be needed to fully reproduce article)
“Object review” badges
ACM “Artifacts Evaluated - Functional”
• Artefacts are sufficiently documented (i.e. to understand how it works, to enable it to be run, including package versions)
• Artefacts are relevant to and contribute to the article’s results
• Complete set of materials shared (as would be needed to fully reproduce article)
• Scripts can be successfully executed
ACM “Artifacts Evaluated - Reusable”
• Artefacts are sufficiently documented (i.e. to understand how it works, to enable it to be run, including package versions)
• Artefacts are carefully documented (more than sufficient - i.e. to the extent that reuse and repurposing is facilitated - e.g. changing parameters, reusing for own purpose)
• Artefacts are relevant to and contribute to the article’s results
• Complete set of materials shared (as would be needed to fully reproduce article)
• Scripts can be successfully executed
• Artefacts are well structured/organised (e.g. to the extent that reuse and repurposing is facilitated, adhering to norms and standards of research community)
IEEE “Code Reviewed”
• Complete set of materials shared (as would be needed to fully reproduce article)
• Scripts can be successfully executed
“Reproduced” badges
NISO “Results Reproduced (ROR-R)”
• Independent party regenerated results using the authors’ research artefacts
ACM “Results Reproduced”
• Independent party regenerated results using the authors’ research artefacts
IEEE “Code Reproducible”
• Independent party regenerated results using the authors’ research artefacts
Psychological Science “Computational Reproducibility”
• Independent party regenerated results using the authors’ research artefacts
• Reproduced within approximately one hour (excluding compute time)
• Artefacts are well structured/organised (e.g. to the extent that reuse and repurposing is facilitated, adhering to norms and standards of research community)
• Artefacts are clearly documented and accompanied by a README file with step-by-step instructions on how to reproduce results in the manuscript

6.3 STARS framework

Key: components are marked as either fully met (✅), partially met (🟡), not met (❌) or not applicable (N/A).

Item S H L K A J
Essential components
Open license
Free and open-source software (FOSS) license (e.g. MIT, GNU Public License (GPL))
Dependency management
Specify software libraries, version numbers and sources (e.g. dependency management tools like virtualenv, conda, poetry)
FOSS model
Coded in FOSS language (e.g. R, Julia, Python)
Minimum documentation
Minimal instructions (e.g. in README) that overview (a) what the model does, (b) how to install and run the model to obtain results, and (c) how to vary parameters to run new experiments
ORCID
ORCID for each study author
Citation information
Instructions on how to cite the research artefact (e.g. CITATION.cff file)
Remote code repository
Code available in a remote code repository (e.g. GitHub, GitLab, BitBucket)
Open science archive
Code stored in an open science archive with FORCE11 compliant citation and guaranteed persistence of digital artefacts (e.g. Figshare, Zenodo, the Open Science Framework (OSF), and the Computational Modeling in the Social and Ecological Sciences Network (CoMSES Net))
Optional components
Enhanced documentation
Open and high quality documentation on how the model is implemented and works (e.g. via notebooks and markdown files, brought together using software like Quarto and Jupyter Book). Suggested content includes:
• Plain English summary of project and model
• Clarifying license
• Citation instructions
• Contribution instructions
• Model installation instructions
• Structured code walk through of model
• Documentation of modelling cycle using TRACE
• Annotated simulation reporting guidelines
• Clear description of model validation including its intended purpose
Documentation hosting
Host documentation (e.g. with GitHub pages, GitLab pages, BitBucket Cloud, Quarto Pub)
Online coding environment
Provide an online environment where users can run and change code (e.g. BinderHub, Google Colaboratory, Deepnote)
Model interface
Provide a web application interface to the model so it is accessible to less technical simulation users
Web app hosting
Host web app online (e.g. Streamlit Community Cloud, ShinyApps hosting)
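The dependency management component above asks that software libraries, version numbers and sources be specified. As an illustration, a minimal conda environment file pins these explicitly; the package names and versions here are hypothetical, not taken from any of the evaluated repositories:

```yaml
# environment.yml - hypothetical example of pinned dependencies
name: des-model
channels:
  - conda-forge
dependencies:
  - python=3.10
  - numpy=1.26.4
  - pandas=2.1.4
  - simpy=4.1.1
```

Recreating the environment from such a file (`conda env create -f environment.yml`) helps an independent party run the model with the same library versions as the authors.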

6.4 Timings

  • Shoaib and Ramamohan (2021) - 30m
  • Huang et al. (2019) - 17m
  • Lim et al. (2020) - 18m
  • Kim et al. (2021) - 18m
  • Anagnostou et al. (2022) - 19m
  • Johnson et al. (2021)
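Excluding Johnson et al. (2021), for which no timing was recorded, the reproduction times above can be summarised with a quick calculation:

```python
# Reproduction times in minutes for the five studies with recorded timings
times = {"Shoaib": 30, "Huang": 17, "Lim": 18, "Kim": 18, "Anagnostou": 19}

mean_minutes = sum(times.values()) / len(times)
print(f"Mean: {mean_minutes:.1f} minutes; max: {max(times.values())} minutes")
```

All five recorded timings fall well within the approximately one-hour threshold used by the Psychological Science “Computational Reproducibility” badge.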

6.5 Badge sources

National Information Standards Organization (NISO) (NISO Reproducibility Badging and Definitions Working Group (2021))

  • “Open Research Objects (ORO)”
  • “Open Research Objects - All (ORO-A)”
  • “Results Reproduced (ROR-R)”

Association for Computing Machinery (ACM) (Association for Computing Machinery (ACM) (2020))

  • “Artifacts Available”
  • “Artifacts Evaluated - Functional”
  • “Artifacts Evaluated - Reusable”
  • “Results Reproduced”

Center for Open Science (COS) (Blohowiak et al. (2023))

  • “Open Code”

Institute of Electrical and Electronics Engineers (IEEE) (Institute of Electrical and Electronics Engineers (IEEE) (n.d.))

  • “Code Available”
  • “Code Reviewed”
  • “Code Reproducible”

Psychological Science (Hardwicke and Vazire (2023) and Association for Psychological Science (APS) (2023))

  • “Computational Reproducibility”

6.6 References

Anagnostou, Anastasia, Derek Groen, Simon J. E. Taylor, Diana Suleimenova, Nura Abubakar, Arindam Saha, Kate Mintram, et al. 2022. “FACS-CHARM: A Hybrid Agent-Based and Discrete-Event Simulation Approach for Covid-19 Management at Regional Level.” In 2022 Winter Simulation Conference (WSC), 1223–34. https://doi.org/10.1109/WSC57314.2022.10015462.
Association for Computing Machinery (ACM). 2020. “Artifact Review and Badging Version 1.1.” ACM. https://www.acm.org/publications/policies/artifact-review-and-badging-current.
Association for Psychological Science (APS). 2023. “Psychological Science Submission Guidelines.” APS. https://www.psychologicalscience.org/publications/psychological_science/ps-submissions.
Blohowiak, Ben B., Johanna Cohoon, Lee de-Wit, Eric Eich, Frank J. Farach, Fred Hasselman, Alex O. Holcombe, Macartan Humphreys, Melissa Lewis, and Brian A. Nosek. 2023. “Badges to Acknowledge Open Practices.” https://osf.io/tvyxz/.
Hardwicke, Tom E., and Simine Vazire. 2023. “Transparency Is Now the Default at Psychological Science.” Psychological Science 0 (0). https://doi.org/10.1177/09567976231221573.
Huang, Shiwei, Julian Maingard, Hong Kuan Kok, Christen D. Barras, Vincent Thijs, Ronil V. Chandra, Duncan Mark Brooks, and Hamed Asadi. 2019. “Optimizing Resources for Endovascular Clot Retrieval for Acute Ischemic Stroke, a Discrete Event Simulation.” Frontiers in Neurology 10 (June). https://doi.org/10.3389/fneur.2019.00653.
Institute of Electrical and Electronics Engineers (IEEE). n.d. “About Content in IEEE Xplore.” IEEE Explore. Accessed May 20, 2024. https://ieeexplore.ieee.org/Xplorehelp/overview-of-ieee-xplore/about-content.
Johnson, Kate M., Mohsen Sadatsafavi, Amin Adibi, Larry Lynd, Mark Harrison, Hamid Tavakoli, Don D. Sin, and Stirling Bryan. 2021. “Cost Effectiveness of Case Detection Strategies for the Early Detection of COPD.” Applied Health Economics and Health Policy 19 (2): 203–15. https://doi.org/10.1007/s40258-020-00616-2.
Kim, Lois G., Michael J. Sweeting, Morag Armer, Jo Jacomelli, Akhtar Nasim, and Seamus C. Harrison. 2021. “Modelling the Impact of Changes to Abdominal Aortic Aneurysm Screening and Treatment Services in England During the COVID-19 Pandemic.” PLOS ONE 16 (6): e0253327. https://doi.org/10.1371/journal.pone.0253327.
Lim, Chun Yee, Mary Kathryn Bohn, Giuseppe Lippi, Maurizio Ferrari, Tze Ping Loh, Kwok-Yung Yuen, Khosrow Adeli, and Andrea Rita Horvath. 2020. “Staff Rostering, Split Team Arrangement, Social Distancing (Physical Distancing) and Use of Personal Protective Equipment to Minimize Risk of Workplace Transmission During the COVID-19 Pandemic: A Simulation Study.” Clinical Biochemistry 86 (December): 15–22. https://doi.org/10.1016/j.clinbiochem.2020.09.003.
Monks, Thomas, Alison Harper, and Navonil Mustafee. 2024. “Towards Sharing Tools and Artefacts for Reusable Simulations in Healthcare.” Journal of Simulation 0 (0): 1–20. https://doi.org/10.1080/17477778.2024.2347882.
NISO Reproducibility Badging and Definitions Working Group. 2021. “Reproducibility Badging and Definitions.” https://doi.org/10.3789/niso-rp-31-2021.
Shoaib, Mohd, and Varun Ramamohan. 2021. “Simulation Modelling and Analysis of Primary Health Centre Operations.” arXiv, June. https://doi.org/10.48550/arXiv.2104.12492.