Pilot 1 - Decision Aid

TIER2's Decision Aid will clarify the meaning, relevance, and feasibility of ‘reproducibility’ for researchers, helping them identify which type of reproducibility is relevant for their research and what they need to consider when judging how feasible it would be for them. The tool will be piloted with two researcher groups (qualitative and machine learning researchers).

Stakeholders: Researchers, publishers, funders

Timeline: -

Objectives: 

  • To explore, through piloting, the extent to which the decision aid (tool) is useful and efficient.

  • The ultimate goal of the tool is to facilitate ‘reproducibility’ where it is relevant and feasible, and to prevent demands for ‘reproducibility’ where it is irrelevant and/or infeasible.

Pilot 2 - Reproducibility Management Plan (RMP)

Reproducibility requires more than data management: software, hardware, methods, workflows, and their connections and executors must also be documented. The Reproducibility Management Plan (RMP) fills this gap by extending data management plans (DMPs) into comprehensive reproducibility planning tools. Co-created with 89 participants across 15 countries in four regions and deployed in the production-ready ARGOS platform, it addresses the full research lifecycle. CHIST-ERA pioneered adoption in its ICT projects, requiring unified Data & Software Management Plans as RMPs. The framework structures thinking around what will be produced, how it will be documented, where it will be deposited, who can access it, and how others can reuse it, creating an actionable blueprint that adapts to any research context. Built into ARGOS, it produces machine-actionable exports using the DMP Common Standard, integrates FAIRsharing for standards and policies, and creates qualified references that connect datasets to software, methods, and workflows. This approach helps researchers prevent downstream problems, enables funders to monitor reproducibility, and shifts reproducibility from a reactive afterthought to proactive planning.
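
For illustration, the sketch below shows what a machine-actionable RMP export might look like, written as a Python dictionary that loosely follows the RDA DMP Common Standard. The identifiers are placeholders and the `related_software` field is a hypothetical extension used only to show how a qualified reference could link a dataset to the software that processes it; this is not the actual ARGOS export schema.

```python
# Minimal sketch of a machine-actionable RMP export, not the actual ARGOS schema.
# Field names loosely follow the RDA DMP Common Standard; identifiers are
# placeholders, and "related_software" is a hypothetical extension illustrating a
# qualified reference from a dataset to the software that processes it.
rmp_export = {
    "dmp": {
        "title": "Reproducibility Management Plan for an example project",
        "created": "2024-01-15",
        "modified": "2024-06-30",
        "dmp_id": {"identifier": "https://example.org/rmp/123", "type": "other"},
        "dataset": [
            {
                "title": "Survey responses (anonymised)",
                "dataset_id": {"identifier": "https://doi.org/10.xxxx/placeholder", "type": "doi"},
                "distribution": [
                    {
                        "title": "CSV export",
                        "data_access": "open",
                        "host": {"title": "Zenodo", "url": "https://zenodo.org"},
                    }
                ],
                # Hypothetical qualified reference: the software and workflow that must
                # be preserved alongside the data for the results to be reproducible.
                "related_software": [
                    {
                        "relation": "isProcessedBy",
                        "identifier": "https://example.org/code/analysis-pipeline",
                    }
                ],
            }
        ],
    }
}
```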

Stakeholders: Researchers, research communities, funders, and service providers

Timeline: -

Objectives: 

  • To emphasise reproducibility activities within a research output management lifecycle;

  • To streamline reproducibility practices in publicly funded research projects;

  • To provide tools and guidance to adopt best reproducibility practices;

  • To generate case studies to promote a common understanding of reproducibility across various domains.

Pilot 3 - Reproducible Workflows

Computational experiments are essential for modern research, yet their complexity often hinders reproducibility. The TIER2 Pilot 3 publication “A Virtual Laboratory for Managing Computational Experiments” introduces SCHEMA lab, an open-source virtual environment that enables the design, execution, and tracking of containerised experiments with full provenance. By capturing configurations, datasets, software environments, and performance metrics, SCHEMA lab enhances reproducibility and transparency across disciplines. It supports individual researchers and research infrastructures in organising, comparing, and reusing computational workflows, fostering credible, reusable, and FAIR digital science practices.

Stakeholders: Life scientists, computer scientists 

Timeline: -

Objectives: The main goal was to customise and evaluate tools and practices for reproducible workflows in the life and computer sciences. The underlying objective was to extend the SCHEMA open-source platform to support reproducibility in both fields by leveraging software containerisation, workflow description languages (CWL, Snakemake), and experiment packaging specifications (RO-Crate), with particular emphasis on machine learning in computer science.
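
As an illustration of the experiment-packaging approach, the sketch below uses the ro-crate-py library to bundle a containerised experiment's workflow, inputs, environment recipe, and recorded metrics into an RO-Crate. The file names and metadata are hypothetical and this is not SCHEMA lab's own packaging code; it only shows the general pattern under those assumptions.

```python
# Minimal sketch of packaging a containerised experiment as an RO-Crate using
# ro-crate-py (pip install rocrate). File names and metadata are illustrative;
# this is not SCHEMA lab's own export routine, and the listed files are assumed
# to exist in the working directory.
from rocrate.rocrate import ROCrate

crate = ROCrate()
crate.root_dataset["name"] = "Containerised experiment run"
crate.root_dataset["description"] = "Workflow, inputs, environment and metrics for one run."

# The workflow definition (e.g. CWL or Snakemake) is the central entity of the crate.
crate.add_file(
    "workflow.cwl",
    properties={"@type": ["File", "SoftwareSourceCode", "ComputationalWorkflow"]},
)

# Input data, the container recipe, and the captured performance metrics travel with it.
crate.add_file("data/input.csv", properties={"encodingFormat": "text/csv"})
crate.add_file("Dockerfile", properties={"description": "Software environment used for the run"})
crate.add_file("results/metrics.json", properties={"description": "Recorded performance metrics"})

# Serialise the crate (the files plus ro-crate-metadata.json) to a directory.
crate.write("experiment-crate")
```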

Pilot 4 - Reproducibility Checklists for Computational Social Science Research

Computational social scientists can enhance the transparency and reproducibility of their work using the simplified Reproducibility Checklist integrated into Methods Hub. The checklist provides essential documentation requirements—covering data, code, computational environment, and sharing—ensuring that methods are easier to understand, reuse, and evaluate. It is embedded in the Methods Hub submission workflow and publicly available in the method guidelines. Developed through literature review and informed by survey insights, the checklist has been refined for usability and tested through user studies and an experimental evaluation. Workshops, training modules, and a published paper support adoption and promote reproducible computational research.

Stakeholders: Computational social scientists (research producers and consumers)

Timeline: -

Objectives: The goal of this Pilot was to enhance the reproducibility of code and data through checklists embedded in the research lifecycle.

Pilot 5 - Reproducibility Promotion Plans for Funders

The Reproducibility Promotion Plan for Funders (RPP) provides a comprehensive policy template with actionable recommendations to help funders foster reproducible research practices across three key areas of their work: evaluation and monitoring, policy and definitions, and incentives. The RPP serves as both guidance and inspiration for research funders to develop robust internal practices that strengthen their funding processes, while also managing external expectations by clearly communicating reproducibility requirements to researchers. Designed to be adaptable, the RPP offers best practice examples and practical tools that funders can tailor to their specific institutional contexts and levels of experience. Ultimately, the RPP aims to catalyse cultural and procedural change within funding organisations, empowering them to support researchers in conducting rigorous, transparent, and reproducible research.

Stakeholders: Funders

Timeline: -

Objectives: This Pilot aimed to help funders improve the research quality of the projects and researchers they fund, in order to build and sustain trustworthy outcomes.

Pilot 6 - Reproducibility Monitoring Dashboard

The Reproducibility Monitoring Dashboard Pilot aims to develop tools that enable funding agencies to track and monitor the reusability of research artifacts across various projects, programs, topics, and disciplines. This auto-generated dashboard assesses the impacts of policies related to data and code sharing. Furthermore, we are establishing essential requirements to make the dashboard user-friendly for publishers.

Stakeholders: Research Performing Organisations (RPOs), Funders, Publishers and Researchers

Timeline: -

Objectives: Develop tools that allow funding agencies to track and monitor the reusability of research artifacts:

  • Develop, extend, and test a suite of tools for tracking major research artifacts in the computer sciences (such as datasets, software, and methods), with a particular focus on Artificial Intelligence.

  • Quantify and estimate reusability indicators based on various types of artifacts (a simple illustration follows this list).

  • Design and implement a dashboard that allows funding agencies to track and monitor the reusability of research artifacts created in funded projects.
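
As a rough illustration of the indicator objective above, the sketch below computes one possible reusability indicator from artifact metadata. The record fields and the scoring rule are assumptions made for this example; they are not the Pilot's actual indicator definitions.

```python
# Minimal sketch of one possible reusability indicator computed over artifact
# metadata. The record fields and the scoring rule are assumptions for this
# example, not the Pilot's actual indicator definitions.
from dataclasses import dataclass

@dataclass
class Artifact:
    kind: str          # "dataset", "software", or "method"
    has_pid: bool      # resolvable persistent identifier (e.g. a DOI)
    has_licence: bool  # explicit reuse licence attached

def reusability_score(artifacts: list[Artifact]) -> float:
    """Share of artifacts that are both identifiable and licensed, i.e. formally reusable."""
    if not artifacts:
        return 0.0
    reusable = sum(1 for a in artifacts if a.has_pid and a.has_licence)
    return reusable / len(artifacts)

project_artifacts = [
    Artifact("dataset", has_pid=True, has_licence=True),
    Artifact("software", has_pid=True, has_licence=False),
    Artifact("method", has_pid=False, has_licence=True),
]
print(f"Reusability indicator: {reusability_score(project_artifacts):.2f}")  # 0.33
```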


Pilot 7 - Editorial Workflows to Increase Data Sharing

The Editorial Workflows to Increase Data Sharing aim to help authors make their research more transparent and reproducible. The workflow automatically sends authors practical guidance on sharing data in trusted repositories when a revision to a manuscript submitted to a journal is requested. By acting at this key stage, it encourages immediate data sharing without affecting the editorial decision. Light-touch, scalable, and adaptable across disciplines, it supports journals and researchers in improving compliance with data sharing policies and fostering a culture of openness.

Stakeholders: Publishers

Timeline: -

Objectives: The pilot aimed to improve our knowledge of data sharing through two activities:

  • A randomised controlled trial of an intervention targeting data availability statements, aiming to increase the deposition of data in trusted repositories.

  • A Delphi study to gather consensus on the most pressing issues and the best paths to improve the sharing of research data underlying publications.

Pilot 8 - An Editorial Reference Handbook for Reproducibility and FAIRness

The Editorial Reference Handbook informs and assists scholarly publishers in supporting the sharing of digital research objects and in operationalising findable, accessible, interoperable and reusable (FAIR) research practices by addressing gaps in editorial workflows, policy implementation and stakeholder alignment. The Handbook comprises three interrelated components—a checklist, detailed guidance, and a flowchart—intended primarily for in-house editorial staff while also providing value to reviewers, authors, and service providers.  

The Pilot included representatives of Cambridge University Press, Cell Press, EMBO Press, F1000 (Taylor & Francis), GigaScience Press, The Lancet, Oxford University Press, PLOS, Springer Nature, and Wiley.

Stakeholders: Publishers

Timeline: -

Objectives: The Handbook was created to help put the requirements of a journal's data policy into action:

  • Journals that already have their own internal guidance will be able to use the Handbook to validate and refine their existing methodology;

  • Journals that do not yet have their own internal guidance should use it as an opportunity to define their own process.

This work will be of use to in-house editorial staff managing manuscripts, but it will also benefit reviewers and authors by clarifying what compliance with a journal's data policy may require, as well as developers shaping the services they provide to publishers.

