Source: ARIS

Mechanistic Interpretability for Explainable Biometric Artificial Intelligence (MIXBAI)

Research activity

Code     Science                                 Field                              Subfield
2.07.00  Engineering sciences and technologies  Computer science and informatics

Code  Science           Field
1.02  Natural Sciences  Computer and information sciences
Keywords
Deep Learning, Biometric Systems, Explainable Artificial Intelligence
Evaluation (methodology)
Source: COBISS
Points: 6,814.59
A'': 534.89
A': 2,633.2
A1/2: 3,484.5
CI10: 5,207
CImax: 499
h10: 36
A1: 22.02
A3: 1.72
Data for the last 5 years (citations for the last 10 years) as of October 15, 2025; data for the A3 score calculation refer to the period 2020-2024.
Data for ARIS tenders (04.04.2019 – Programme tender, archive)
Database  Linked records  Citations  Pure citations  Average pure citations
WoS       174             2,892      2,606           14.98
Scopus    282             5,163      4,593           16.29
Organisations (2), Researchers (15)
1538  University of Ljubljana, Faculty of Electrical Engineering
No.  Code  Name and surname  Research area  Role  Period  No. of publications
1.  55920  Žiga Babnik  Computer science and informatics  Young researcher  2023 - 2025  13 
2.  11805  PhD Simon Dobrišek  Computer science and informatics  Researcher  2023 - 2025  296 
3.  38118  PhD Klemen Grm  Systems and cybernetics  Head  2023 - 2025  53 
4.  53879  Marija Ivanovska  Systems and cybernetics  Researcher  2023 - 2025  36 
5.  31985  PhD Janez Križaj  Systems and cybernetics  Researcher  2023 - 2025  43 
6.  50843  PhD Jon Natanael Muhovič  Computer science and informatics  Researcher  2023 - 2025  29 
7.  21310  PhD Janez Perš  Systems and cybernetics  Researcher  2023 - 2025  256 
8.  53724  Peter Rot  Computer science and informatics  Researcher  2024 - 2025  32 
9.  28458  PhD Vitomir Štruc  Systems and cybernetics  Researcher  2023 - 2025  418 
1539  University of Ljubljana, Faculty of Computer and Information Science
No.  Code  Name and surname  Research area  Role  Period  No. of publications
1.  53820  PhD Žiga Emeršič  Computer science and informatics  Researcher  2023 - 2025  103 
2.  53819  PhD Blaž Meden  Computer science and informatics  Researcher  2023 - 2025  62 
3.  54781  Tim Oblak  Computer science and informatics  Researcher  2023 - 2025  20 
4.  19226  PhD Peter Peer  Computer science and informatics  Researcher  2023 - 2025  458 
5.  53724  Peter Rot  Computer science and informatics  Researcher  2025  32 
6.  52095  Matej Vitek  Computer science and informatics  Researcher  2023 - 2025  24 
Abstract
Recent advances in the field of artificial intelligence have enabled significant breakthroughs in automated biometric systems, in both discriminative and generative settings. In many applications, AI-powered automated biometric systems are able to match or surpass human abilities, while enabling the deployment of biometrics at a massive scale compared to manual, human-supervised solutions. Given the performance and the increasingly widespread use of automated biometric systems, it is important for researchers to be able to explain their functionality and determine whether their biometric decisions are based on sound principles, i.e., that they are made in a fair, unbiased and non-discriminatory manner, while respecting user privacy and data protection to the greatest extent possible. Biometric systems that meet these criteria are highly desirable, which has also recently been recognized by national- and EU-level regulations concerning data protection, user privacy, the right to an explanation, and similar legal frameworks. Mechanistic interpretability is one of the latest frameworks proposed for explaining state-of-the-art deep learning models: it aims to produce a gears-level understanding of how a model computes, as opposed to earlier methods that produce decision attributions while treating the model as a black box. Within the scope of the proposed MIXBAI research project, we will both extend the capabilities of the latest AI explainability techniques and apply them to the study of modern biometric AI systems, in order to produce the kind of model and decision explanations that allow automated biometric AI systems to be used safely and legally within the various existing and prospective regulatory frameworks. Increasing the transparency of biometric AI decision making will also increase user trust in these systems, precluding possible public controversies regarding their use.
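The contrast drawn above between black-box attribution and mechanistic interpretability can be made concrete in code. Below is a minimal, hypothetical sketch (not part of the project's materials, assuming PyTorch): a gradient-saliency map treats a toy embedding network as a black box and only says where in the input the decision looks, while a forward hook exposes an internal activation, the kind of raw material mechanistic analyses work with. The model, layer choice, and input shape are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy stand-in for a biometric embedding network (illustrative only).
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 4),  # 4-dim "identity embedding"
)
model.eval()

x = torch.randn(1, 3, 112, 112, requires_grad=True)  # fake face crop

# Black-box attribution: gradient of an output score w.r.t. the input.
# Explains *where* the decision looks, not *how* it is computed.
model(x).norm().backward()
saliency = x.grad.abs().max(dim=1).values  # (1, 112, 112) heatmap

# Mechanistic-style probe: capture an intermediate activation via a hook,
# the starting point for analysing *how* internal components compute.
acts = {}
def save_act(module, inputs, output):
    acts["conv1"] = output.detach()
handle = model[0].register_forward_hook(save_act)
with torch.no_grad():
    model(x)
handle.remove()

print(saliency.shape, acts["conv1"].shape)  # attribution vs. internal state
```

Actual mechanistic work goes further, e.g. identifying what individual channels or circuits encode, but the hook-based access to internal activations shown above is the typical entry point.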