
As part of the Hybrid Intelligence work package, a team from Maastricht University, in collaboration with the Netherlands eScience Center, has developed FAIVOR, a tool designed to facilitate the transparent validation of AI models before and during their application in clinical practice.
The FAIVOR (FAIR AI validation and quality control) tool addresses one of the key challenges in integrating artificial intelligence into healthcare: ensuring that pre-trained models remain reliable, fair, and safe when applied to new datasets. By providing a structured and reproducible evaluation process, FAIVOR enables clinicians and researchers to assess the trustworthiness of AI systems used in real-world settings.
The development of FAIVOR focuses on three main objectives:
FAIVOR’s architecture consists of three integrated components:
The first results of this project were presented at the Responsible AI in Health Care conference in Rotterdam, the Netherlands (craihc.com). During the event, Daniël Slob introduced the main objectives and preliminary results, highlighting FAIVOR's potential to foster transparency and trust in the deployment of medical AI systems. The project will also be presented at the EFMI Special Topic Conference 2025 in Osnabrück (https://stc2025.efmi.org).