This page evaluates the extent to which the author-published research artefacts meet the criteria of reproducibility-related badges from various organisations and journals.
Caveat: Please note that these criteria are based on the information about each badge that is publicly available online. Moreover, we focus only on reproduction of the discrete-event simulation, and not on other aspects of the article. We cannot guarantee that the badges below would have been awarded in practice by these organisations and journals.
Criteria
Code
```python
from IPython.display import display, Markdown
import numpy as np
import pandas as pd

# Criteria and their definitions
criteria = {
    'archive': 'Artefacts are archived in a repository that is: (a) public (b) guarantees persistence (c) gives a unique identifier (e.g. DOI)',
    'licence': 'Open licence',
    'complete': 'Complete (all relevant artefacts available)',
    'docs1': 'Documents (a) how code is used (b) how it relates to article (c) software, systems, packages and versions',
    'docs2': 'Documents (a) inventory of artefacts (b) sufficient description for artefacts to be exercised',
    'relevant': 'Artefacts relevant to paper',
    'execute': 'Scripts can be successfully executed',
    'careful': 'Artefacts are carefully documented and well-structured to the extent that reuse and repurposing is facilitated, adhering to norms and standards',
    'reproduce': 'Reproduced results (assuming (a) acceptably similar (b) reasonable time frame (c) only minor troubleshooting)',
    'readme': 'README file with step-by-step instructions to run analysis',
    'dependencies': 'Dependencies (e.g. package versions) stated',
    'correspond': 'Clear how output of analysis corresponds to article'
}

# Evaluation for this study
eval = pd.Series({
    'archive': 0,
    'licence': 1,
    'complete': 0,
    'docs1': 0,
    'docs2': 1,
    'relevant': 1,
    'execute': 1,
    'careful': 1,
    'reproduce': 0,
    'readme': 0,
    'dependencies': 0,
    'correspond': 0
})

# Get list of criteria met (True/False) overall
eval_list = list(eval)


# Define function for creating the markdown formatted list of criteria met
def create_criteria_list(criteria_dict):
    '''
    Creates a string which contains a Markdown formatted list with icons to
    indicate whether each criterion was met.

    Parameters:
    -----------
    criteria_dict : dict
        Dictionary where keys are the criteria (variable name) and values are
        Boolean (True/False of whether this study met the criteria)

    Returns:
    --------
    formatted_list : string
        Markdown formatted list
    '''
    callout_icon = {True: '✅', False: '❌'}
    # Create list with...
    formatted_list = ''.join([
        '* ' +
        callout_icon[eval[key]] +  # Icon based on whether it met criteria
        ' ' + value +  # Full text description of criteria
        '\n'
        for key, value in criteria_dict.items()])
    return formatted_list


# Define groups of criteria
criteria_share_how = ['archive', 'licence']
criteria_share_what = ['complete', 'relevant']
criteria_doc_struc = ['docs1', 'docs2', 'careful', 'readme', 'dependencies', 'correspond']
criteria_run = ['execute', 'reproduce']

# Create text section
display(Markdown(f'''
To assess whether the author's materials met the requirements of each badge, a list of criteria was produced. Between each badge (and between categories of badge), there is often a lot of overlap in criteria.

This study met **{sum(eval_list)} of the {len(eval_list)}** unique criteria items. These were as follows:

Criteria related to how artefacts are shared -

{create_criteria_list({k: criteria[k] for k in criteria_share_how})}

Criteria related to what artefacts are shared -

{create_criteria_list({k: criteria[k] for k in criteria_share_what})}

Criteria related to the structure and documentation of the artefacts -

{create_criteria_list({k: criteria[k] for k in criteria_doc_struc})}

Criteria related to running and reproducing results -

{create_criteria_list({k: criteria[k] for k in criteria_run})}
'''))
```
To assess whether the author’s materials met the requirements of each badge, a list of criteria was produced. Between each badge (and between categories of badge), there is often a lot of overlap in criteria.
This study met 5 of the 12 unique criteria items. These were as follows:
Criteria related to how artefacts are shared -
❌ Artefacts are archived in a repository that is: (a) public (b) guarantees persistence (c) gives a unique identifier (e.g. DOI)
✅ Open licence
Criteria related to what artefacts are shared -
❌ Complete (all relevant artefacts available)
✅ Artefacts relevant to paper
Criteria related to the structure and documentation of the artefacts -
❌ Documents (a) how code is used (b) how it relates to article (c) software, systems, packages and versions
✅ Documents (a) inventory of artefacts (b) sufficient description for artefacts to be exercised
✅ Artefacts are carefully documented and well-structured to the extent that reuse and repurposing is facilitated, adhering to norms and standards
❌ README file with step-by-step instructions to run analysis
❌ Dependencies (e.g. package versions) stated
❌ Clear how output of analysis corresponds to article
Criteria related to running and reproducing results -
✅ Scripts can be successfully executed
❌ Reproduced results (assuming (a) acceptably similar (b) reasonable time frame (c) only minor troubleshooting)
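As a quick sanity check, the tally reported above can be recomputed directly from the `eval` Series defined in the code cell; a minimal sketch, assuming that cell has already been run:

```python
# Each entry in 'eval' is a 0/1 score for one criterion, so the sum of the
# Series is the number of criteria met and its length the number assessed
assert len(eval) == 12
assert eval.sum() == 5  # licence, docs2, relevant, execute, careful
```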
Badges
Code
```python
# Full badge names
badge_names = {
    # Open objects
    'open_acm': 'ACM "Artifacts Available"',
    'open_niso': 'NISO "Open Research Objects (ORO)"',
    'open_niso_all': 'NISO "Open Research Objects - All (ORO-A)"',
    'open_cos': 'COS "Open Code"',
    'open_ieee': 'IEEE "Code Available"',
    # Object review
    'review_acm_functional': 'ACM "Artifacts Evaluated - Functional"',
    'review_acm_reusable': 'ACM "Artifacts Evaluated - Reusable"',
    'review_ieee': 'IEEE "Code Reviewed"',
    # Results reproduced
    'reproduce_acm': 'ACM "Results Reproduced"',
    'reproduce_niso': 'NISO "Results Reproduced (ROR-R)"',
    'reproduce_ieee': 'IEEE "Code Reproducible"',
    'reproduce_psy': 'Psychological Science "Computational Reproducibility"'
}

# Criteria required by each badge
badges = {
    # Open objects
    'open_acm': ['archive'],
    'open_niso': ['archive', 'licence'],
    'open_niso_all': ['archive', 'licence', 'complete'],
    'open_cos': ['archive', 'licence', 'docs1'],
    'open_ieee': ['complete'],
    # Object review
    'review_acm_functional': ['docs2', 'relevant', 'complete', 'execute'],
    'review_acm_reusable': ['docs2', 'relevant', 'complete', 'execute', 'careful'],
    'review_ieee': ['complete', 'execute'],
    # Results reproduced
    'reproduce_acm': ['reproduce'],
    'reproduce_niso': ['reproduce'],
    'reproduce_ieee': ['reproduce'],
    'reproduce_psy': ['reproduce', 'readme', 'dependencies', 'correspond']
}

# Identify which badges would be awarded based on criteria
# Get list of badges met (True/False) overall
award = {}
for badge in badges:
    award[badge] = all([eval[key] == 1 for key in badges[badge]])
award_list = list(award.values())

# Write introduction
# Get list of badges met (True/False) by category
award_open = [v for k, v in award.items() if k.startswith('open_')]
award_review = [v for k, v in award.items() if k.startswith('review_')]
award_reproduce = [v for k, v in award.items() if k.startswith('reproduce_')]

# Create and display text for introduction
display(Markdown(f'''
In total, the original study met the criteria for **{sum(award_list)} of the {len(award_list)} badges**. This included:

* **{sum(award_open)} of the {len(award_open)}** “open objects” badges
* **{sum(award_review)} of the {len(award_review)}** “object review” badges
* **{sum(award_reproduce)} of the {len(award_reproduce)}** “reproduced” badges
'''))


# Make function that creates collapsible callouts for each badge
def create_badge_callout(award_dict):
    '''
    Displays Markdown callouts created for each badge in the dictionary,
    showing whether the criteria for that badge were met.

    Parameters:
    -----------
    award_dict : dict
        Dictionary where key is badge (as variable name), and value is
        Boolean (whether badge is awarded)
    '''
    callout_appearance = {True: 'tip', False: 'warning'}
    callout_icon = {True: '✅', False: '❌'}
    callout_text = {True: 'Meets all criteria:', False: 'Does not meet all criteria:'}

    for key, value in award_dict.items():
        # Create Markdown list with...
        criteria_list = ''.join([
            '* ' +
            callout_icon[eval[k]] +  # Icon based on whether it met criteria
            ' ' + criteria[k] +  # Full text description of criteria
            '\n'
            for k in badges[key]])
        # Create the callout and display it
        display(Markdown(f'''
::: {{.callout-{callout_appearance[value]} appearance="minimal" collapse=true}}

## {callout_icon[value]} {badge_names[key]}

{callout_text[value]}

{criteria_list}
:::'''))


# Create badge sections with introductions and callouts
display(Markdown('''
### "Open objects" badges

These badges relate to research artefacts being made openly available.'''))
create_badge_callout({k: v for (k, v) in award.items() if k.startswith('open_')})

display(Markdown('''
### "Object review" badges

These badges relate to the research artefacts being reviewed against criteria of the badge issuer.'''))
create_badge_callout({k: v for (k, v) in award.items() if k.startswith('review_')})

display(Markdown('''
### "Reproduced" badges

These badges relate to an independent party regenerating the results of the article using the author objects.'''))
create_badge_callout({k: v for (k, v) in award.items() if k.startswith('reproduce_')})
```
In total, the original study met the criteria for 0 of the 12 badges. This included:
0 of the 5 “open objects” badges
0 of the 3 “object review” badges
0 of the 4 “reproduced” badges
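The rule behind these counts is that a badge is awarded only when every one of its criteria is met. A minimal sketch of that check for a single badge, reusing the `eval` and `badges` objects from the code cell above:

```python
# NISO "Open Research Objects (ORO)" requires 'archive' and 'licence';
# 'archive' was not met, so the badge is not awarded
award_niso_oro = all(eval[k] == 1 for k in badges['open_niso'])
print(award_niso_oro)  # False
```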
“Open objects” badges
These badges relate to research artefacts being made openly available.
❌ ACM “Artifacts Available”
Does not meet all criteria:
❌ Artefacts are archived in a repository that is: (a) public (b) guarantees persistence (c) gives a unique identifier (e.g. DOI)
❌ NISO “Open Research Objects (ORO)”
Does not meet all criteria:
❌ Artefacts are archived in a repository that is: (a) public (b) guarantees persistence (c) gives a unique identifier (e.g. DOI)
✅ Open licence
❌ NISO “Open Research Objects - All (ORO-A)”
Does not meet all criteria:
❌ Artefacts are archived in a repository that is: (a) public (b) guarantees persistence (c) gives a unique identifier (e.g. DOI)
✅ Open licence
❌ Complete (all relevant artefacts available)
❌ COS “Open Code”
Does not meet all criteria:
❌ Artefacts are archived in a repository that is: (a) public (b) guarantees persistence (c) gives a unique identifier (e.g. DOI)
✅ Open licence
❌ Documents (a) how code is used (b) how it relates to article (c) software, systems, packages and versions
❌ IEEE “Code Available”
Does not meet all criteria:
❌ Complete (all relevant artefacts available)
“Object review” badges
These badges relate to the research artefacts being reviewed against criteria of the badge issuer.
❌ ACM “Artifacts Evaluated - Functional”
Does not meet all criteria:
✅ Documents (a) inventory of artefacts (b) sufficient description for artefacts to be exercised
✅ Artefacts relevant to paper
❌ Complete (all relevant artefacts available)
✅ Scripts can be successfully executed
❌ ACM “Artifacts Evaluated - Reusable”
Does not meet all criteria:
✅ Documents (a) inventory of artefacts (b) sufficient description for artefacts to be exercised
✅ Artefacts relevant to paper
❌ Complete (all relevant artefacts available)
✅ Scripts can be successfully executed
✅ Artefacts are carefully documented and well-structured to the extent that reuse and repurposing is facilitated, adhering to norms and standards
❌ IEEE “Code Reviewed”
Does not meet all criteria:
❌ Complete (all relevant artefacts available)
✅ Scripts can be successfully executed
“Reproduced” badges
These badges relate to an independent party regenerating the results of the article using the author objects.
❌ ACM “Results Reproduced”
Does not meet all criteria:
❌ Reproduced results (assuming (a) acceptably similar (b) reasonable time frame (c) only minor troubleshooting)
❌ NISO “Results Reproduced (ROR-R)”
Does not meet all criteria:
❌ Reproduced results (assuming (a) acceptably similar (b) reasonable time frame (c) only minor troubleshooting)
❌ IEEE “Code Reproducible”
Does not meet all criteria:
❌ Reproduced results (assuming (a) acceptably similar (b) reasonable time frame (c) only minor troubleshooting)
❌ Psychological Science “Computational Reproducibility”
Does not meet all criteria:
❌ Reproduced results (assuming (a) acceptably similar (b) reasonable time frame (c) only minor troubleshooting)
❌ README file with step-by-step instructions to run analysis
❌ Dependencies (e.g. package versions) stated
❌ Clear how output of analysis corresponds to article
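The per-badge decisions can also be collapsed into a single overview. A minimal sketch of such a summary table, assuming the `badge_names`, `badges`, `eval` and `award` objects from the code cell above are in scope (the column names here are illustrative, not from the original analysis):

```python
# One row per badge: number of criteria met, number required, and decision
badge_summary = pd.DataFrame({
    'badge': [badge_names[b] for b in badges],
    'criteria_met': [sum(eval[c] for c in badges[b]) for b in badges],
    'criteria_total': [len(badges[b]) for b in badges],
    'awarded': [award[b] for b in badges],
})
print(badge_summary.to_string(index=False))
```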