FAIR Benchmark - Institutional Repository Datasets


FAIRsharing is collaborating with a number of organisations, including the University of Oxford, to develop a benchmark suitable for datasets within institutional repositories. This benchmark is built primarily by re-using existing generic metrics. This reuse strengthens transparency and comparability, as all components (benchmark, metrics, as well as the underlying tests, definitions and justifications) are openly registered and discoverable. Rather than inventing a completely separate interpretation of FAIR, the benchmark demonstrates how communities can adopt shared components to explicitly define how FAIR should be interpreted and tested in their own context. This approach makes assessment behaviour visible and reusable, enabling communities to state clearly what FAIR means to them while remaining aligned with a broader, transparent ecosystem of metrics and benchmarks.

The Institutional Repository Datasets Benchmark provides a structured framework for delivering FAIR assessment of metadata describing research datasets held within institutional repositories. It operationalises the FAIR principles in a practical way, supporting alignment with community-endorsed research standards and best practices. The benchmark is primarily intended for two audiences:

  1. institutional repository teams, who implement and run the associated assessment tests as part of repository workflows; and

  2. affiliated researchers, who deposit their datasets in the repository and use the benchmark outcomes to understand the FAIRness of their data, identify areas for improvement, and strengthen their knowledge of FAIR.

By providing a consistent and transparent evaluation approach, the benchmark supports coordinated FAIR assessment of dataset metadata across institutional repositories and across all research disciplines. This benchmark distinguishes between repository-implemented FAIR properties and record-level FAIR properties, and clearly states where metrics have been re-used and where modifications or specialist metrics were required. The benchmark is designed to be applied to institutional repository records once they have been made publicly available.

Below are the metrics created during the development of this benchmark. We encourage the re-use of these metrics whenever they are applicable to other communities.

FAIR Metric – I3 – Metadata – Qualified References to Versioned Records
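As an illustration only (this is not the official FAIRsharing test, and the field names follow DataCite-style conventions as an assumption), a check for qualified references to versioned records might verify that a metadata record's related identifiers carry both a relation type and a version-specific target:

```python
# Illustrative sketch, not the official test: a reference is "qualified"
# if it declares a relationType, and "versioned" if it carries an explicit
# version field or a version suffix in the identifier. Field names
# (relatedIdentifiers, relationType, resourceVersion) are assumed,
# DataCite-style conventions.

def has_qualified_versioned_references(record: dict) -> bool:
    """Return True if at least one related identifier is qualified
    (has a relationType) and points to a versioned record."""
    for ref in record.get("relatedIdentifiers", []):
        qualified = bool(ref.get("relationType"))
        # Crude versioning heuristic: explicit version metadata, or a
        # '.v<N>'-style suffix embedded in the identifier itself.
        versioned = bool(ref.get("resourceVersion")) or ".v" in ref.get(
            "relatedIdentifier", ""
        )
        if qualified and versioned:
            return True
    return False

# Example record with one qualified, versioned reference.
example = {
    "relatedIdentifiers": [
        {
            "relatedIdentifier": "10.5281/zenodo.1234567.v2",
            "relationType": "IsDerivedFrom",
        }
    ]
}

print(has_qualified_versioned_references(example))  # True
```

A real implementation would additionally resolve each identifier and confirm the target record exists; the sketch above only inspects the metadata itself.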
