# FAIR Benchmark - Institutional Repository Datasets

{% hint style="warning" %}
This feature is in active development.
{% endhint %}

FAIRsharing is collaborating with a number of organisations, including the University of Oxford, to develop a benchmark suitable for datasets within institutional repositories. This benchmark is built primarily by re-using existing generic metrics. This reuse strengthens transparency and comparability, as all components (benchmark, metrics, as well as the underlying tests, definitions and justifications) are openly registered and discoverable. Rather than inventing a completely separate interpretation of FAIR, the benchmark demonstrates how communities can adopt shared components to explicitly define how FAIR should be interpreted and tested in their own context. This approach makes assessment behaviour visible and reusable, enabling communities to state clearly what FAIR means to them while remaining aligned with a broader, transparent ecosystem of metrics and benchmarks.

The [Institutional Repository Datasets](https://fairsharing.org/7598) Benchmark provides a structured framework for the FAIR assessment of metadata describing research datasets held within institutional repositories. It operationalises the FAIR principles in a practical way, supporting alignment with community-endorsed research standards and best practices. The benchmark is primarily intended for two audiences:

1. **institutional repository teams**, who implement and run the associated assessment tests as part of repository workflows; and
2. **affiliated researchers**, who deposit their datasets in the repository and can use the benchmark outcomes to understand the FAIRness of their datasets and identify areas for improvement.

By providing a consistent and transparent evaluation approach, the benchmark supports coordinated FAIR assessment of dataset metadata across institutional repositories and across all research disciplines. This benchmark distinguishes between repository-implemented FAIR properties and record-level FAIR properties, and clearly states where metrics have been re-used and where modifications or specialist metrics were required. The benchmark is designed to be applied to institutional repository records once they have been made publicly available.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://fairsharing.gitbook.io/fairsharing/about-our-records/fair-assistance/fair-benchmark-institutional-repository-datasets.md?ask=<question>
```

The question should be specific, self-contained, written in natural language, and URL-encoded.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
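As a minimal sketch, the request URL can be assembled in Python before issuing the GET request. The `build_ask_url` helper is hypothetical (not part of any FAIRsharing client), and the question text is purely illustrative:

```python
from urllib.parse import quote

# URL of this documentation page, taken from the example above.
PAGE_URL = (
    "https://fairsharing.gitbook.io/fairsharing/about-our-records/"
    "fair-assistance/fair-benchmark-institutional-repository-datasets.md"
)

def build_ask_url(question: str) -> str:
    """Percent-encode the question and attach it as the `ask` query parameter."""
    return f"{PAGE_URL}?ask={quote(question)}"

# Illustrative question; any natural-language query works the same way.
url = build_ask_url("Which metrics does the benchmark re-use?")
print(url)
```

The resulting URL can then be fetched with any HTTP client (for example, `curl "$url"`); the response contains the answer plus supporting excerpts from the documentation.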
