
As part of the Hybrid Intelligence work package, a team from Maastricht University, in collaboration with the Netherlands eScience Center, has developed FAIVOR—a tool designed to facilitate the transparent validation of AI models before and during their application in clinical practice.
The FAIVOR (FAIR AI validation and quality control) tool addresses one of the key challenges in integrating artificial intelligence into healthcare: ensuring that pre-trained models remain reliable, fair, and safe when applied to new datasets. By providing a structured and reproducible evaluation process, FAIVOR enables clinicians and researchers to assess the trustworthiness of AI systems used in real-world settings.
The development of FAIVOR focuses on three main objectives:
• Data privacy: strict adherence to ethical and legal standards, ensuring no sensitive patient information is exposed.
• Model agnosticism: compatibility with models trained on different machine learning platforms (e.g., PyTorch, TensorFlow) and diverse data formats.
• Robust evaluation: going beyond accuracy metrics to assess fairness, uncertainty, and distributional similarity between datasets (a brief sketch of such checks follows this list).
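The exact metrics FAIVOR computes are not detailed here; as a minimal illustration only, the Python sketch below shows what such checks could look like, using a true-positive-rate gap between subgroups as a fairness proxy and the two-sample Kolmogorov-Smirnov statistic for distributional similarity. The function names and metric choices are assumptions, not FAIVOR's implementation.

```python
# Illustrative sketch only: metric choices and names are assumptions,
# not FAIVOR's actual implementation.
import numpy as np
from scipy.stats import ks_2samp

def tpr_gap(y_true, y_pred, group):
    """Fairness proxy: spread in true-positive rate across subgroups."""
    rates = []
    for g in np.unique(group):
        positives = (group == g) & (y_true == 1)
        rates.append(np.mean(y_pred[positives] == 1))
    return float(max(rates) - min(rates))

def distribution_shift(reference_feature, local_feature):
    """Distributional similarity between a reference dataset and a local
    one, via the two-sample Kolmogorov-Smirnov test."""
    stat, p_value = ks_2samp(reference_feature, local_feature)
    return {"ks_statistic": float(stat), "p_value": float(p_value)}
```

A low p-value in the second check flags a feature whose local distribution differs from the data the model was built on, which is exactly the situation in which a model's reported accuracy can no longer be taken at face value.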
FAIVOR’s architecture consists of three integrated components:
1. Model package: a Docker container exposing a standardized REST API to make models interoperable, complemented by a command-line tool that simplifies packaging for researchers (see the first sketch after this list).
2. FAIR AI Library: a repository providing detailed metadata and container URIs for models, following the FAIR (Findable, Accessible, Interoperable, Reusable) principles.
3. Graphical user interface: a user-friendly application that retrieves models from the FAIR AI Library, performs local validations, generates statistical reports, and visualizes performance trends across hospitals and over time (a client-side sketch follows this list).
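To make the model-package idea concrete, here is a minimal sketch of a containerizable model server. The /predict route, the payload schema, and the use of FastAPI and joblib are illustrative assumptions, not FAIVOR's published interface.

```python
# Minimal sketch of a packaged model's REST interface; the route and
# payload schema below are hypothetical, not FAIVOR's specification.
from fastapi import FastAPI
from pydantic import BaseModel
import joblib  # assuming a scikit-learn style model artifact

app = FastAPI(title="Example packaged model")
model = joblib.load("model.joblib")  # hypothetical artifact baked into the image

class PredictRequest(BaseModel):
    instances: list[list[float]]  # rows of feature values

@app.post("/predict")
def predict(req: PredictRequest):
    # Any framework (PyTorch, TensorFlow, scikit-learn) can sit behind
    # this same route, which is what makes the package model-agnostic.
    return {"predictions": model.predict(req.instances).tolist()}
```

Because every packaged model answers the same route with the same payload shape, the validation tooling never needs to know which framework produced the model; the Docker image only has to start a server like this one as its entrypoint.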
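Likewise, a validation client can tie the second and third components together: look up a model's FAIR metadata and container URI in the library, then send local data to the running container and score its responses. The registry URL, routes, and response fields below are illustrative assumptions.

```python
# Hypothetical client-side validation flow; the registry URL, routes,
# and response fields are assumptions for illustration.
import requests

REGISTRY = "https://fair-library.example.org"  # placeholder registry URL

def fetch_metadata(model_id: str) -> dict:
    """Retrieve a model's FAIR metadata record, including its container URI."""
    return requests.get(f"{REGISTRY}/models/{model_id}", timeout=10).json()

def validate_locally(model_endpoint: str, features, labels) -> dict:
    """Send local data to the running model container and score the output.
    Consistent with the privacy objective above, the data only travels to a
    locally running container; just its predictions are read back."""
    resp = requests.post(f"{model_endpoint}/predict",
                         json={"instances": features}, timeout=60)
    predictions = resp.json()["predictions"]
    correct = sum(int(p == y) for p, y in zip(predictions, labels))
    return {"n": len(labels), "accuracy": correct / len(labels)}
```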
The first results of this project were presented at the Responsible AI in Health Care conference in Rotterdam, the Netherlands (craihc.com). During the event, Daniël Slob introduced the main objectives and preliminary results, highlighting FAIVOR’s potential to foster transparency and trust in the deployment of medical AI systems. This project will also be presented at the EFMI Special Topic Conference 2025 in Osnabrück (https://stc2025.efmi.org).