Projects / Programmes source: ARIS

DeepFake Detection using Anomaly Detection Methods (DeepFake DAD)

Research activity

Code: 2.06.00
Science: Engineering sciences and technologies
Field: Systems and cybernetics

Code: 2.02
Science: Engineering and Technology
Field: Electrical engineering, Electronic engineering, Information engineering
Keywords
Computer vision, facial images, deep fakes, anomaly detection, one-class learning, deep learning, representation learning
Evaluation (methodology) (source: COBISS)
Points: 6,330.61
A'': 580.94
A': 2,478.3
A1/2: 3,237.21
CI10: 6,566
CImax: 1,170
h10: 37
A1: 20.49
A3: 5.75
Data for the last 5 years (citations for the last 10 years) on October 15, 2025; Data for score A3 calculation refer to period 2020-2024
Data for ARIS tenders (04.04.2019 – Programme tender)
Database  Linked records  Citations  Pure citations  Average pure citations
WoS       176             2,908      2,619           14.88
Scopus    301             5,203      4,594           15.26
Organisations (3), Researchers (15)
1538  University of Ljubljana, Faculty of Electrical Engineering
no.  Code  Name and surname  Research area  Role  Period  No. of publications
1.  55920  Žiga Babnik  Computer science and informatics  Young researcher  2023 - 2025  13 
2.  58386  Marko Brodarič  Computer science and informatics  Researcher  2023 - 2025 
3.  38118  PhD Klemen Grm  Systems and cybernetics  Researcher  2023 - 2025  53 
4.  53879  Marija Ivanovska  Systems and cybernetics  Researcher  2023 - 2025  36 
5.  31985  PhD Janez Križaj  Systems and cybernetics  Researcher  2023 - 2025  43 
6.  20183  PhD Boštjan Murovec  Computer science and informatics  Researcher  2025  224 
7.  21310  PhD Janez Perš  Systems and cybernetics  Researcher  2023 - 2025  256 
8.  28458  PhD Vitomir Štruc  Systems and cybernetics  Head  2023 - 2025  418 
1539  University of Ljubljana, Faculty of Computer and Information Science
no.  Code  Name and surname  Research area  Role  Period  No. of publications
1.  22472  PhD Borut Batagelj  Computer science and informatics  Researcher  2025  214 
2.  58386  Marko Brodarič  Computer science and informatics  Researcher  2023 - 2025 
3.  53819  PhD Blaž Meden  Computer science and informatics  Researcher  2023 - 2025  62 
4.  19226  PhD Peter Peer  Computer science and informatics  Researcher  2023 - 2025  458 
5.  56901  Darian Tomašević  Computer science and informatics  Young researcher  2023 - 2025  11 
6.  52095  Matej Vitek  Computer science and informatics  Researcher  2023 - 2025  24 
1986  ALPINEON R & D
no.  Code  Name and surname  Research area  Role  Period  No. of publications
1.  12000  PhD Jerneja Žganec Gros  Computer science and informatics  Researcher  2023 - 2025  292 
Abstract
Advances in artificial intelligence and deep learning, especially in the fields of computer vision and generative models, have made manipulating images and video footage in a photo-realistic manner increasingly accessible, lowering the skill and time investment needed to produce visually convincing tampered (fake) footage or imagery. This has resulted in the rise of so-called deepfakes: fake footage produced at scale by machine-learning methods designed specifically for this purpose. While legal and ethical uses of deepfakes exist, their use towards legally and ethically questionable ends, including blackmail, fake news and involuntary pornography, is especially troubling. The sheer quantity of deepfake footage that can be generated automatically can quickly overwhelm human reviewers; this is already the case for human-generated video content on popular video-sharing sites such as YouTube, which primarily rely on automated algorithms to filter out illegal and undesirable content. With the increased use of convenient authentication schemes based on video-conferencing tools in the financial and public sectors, (future) real-time deepfake technology also has the potential to facilitate identity theft, with considerable implications for people's finances and property. To prevent such illicit activities enabled by deepfake technologies, it is paramount to have highly automated and reliable means of detecting deepfakes at one's disposal. Such detection technology not only enables efficient (large-scale) screening of image and video content, but also allows non-experts to identify whether a given video or image is real or manipulated. Within the proposed fundamental research project Deepfake detection using anomaly detection methods (DeepFake DAD), we will address this issue and conduct research on fundamentally novel methods for deepfake detection that address the deficiencies of current solutions in this problem domain.
Existing deepfake detectors rely on (semi-)handcrafted features that have been shown to work against a predefined set of publicly available/known deepfake generation methods. Detection techniques developed in this manner are, however, vulnerable to (i.e., unable to detect) unknown or unseen (future) deepfake generation methods. The goal of DeepFake DAD is therefore to develop detection models that can be trained in a semi-supervised or unsupervised manner, without relying on training samples from publicly available deepfake generation techniques, i.e., within so-called anomaly detection frameworks trainable in a one-class learning regime. The main tangible result of the research project will be highly robust deepfake detection solutions that outperform the current state of the art in terms of generalization capabilities and can assist end-users and platform providers in automatically detecting tampered imagery and video, allowing them to act accordingly and avoid the negative personal, societal, economic, and political implications of widespread, undetectable fake footage. The feasibility of the project is ensured by the past performance of the research group and the extensive experience of the project partners in generative deep models, face manipulation and facial analysis tasks.
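The one-class learning idea described above can be illustrated with a minimal sketch. Everything here is an illustrative assumption rather than the project's actual method: the embeddings are synthetic stand-ins for learned face representations, and a simple Gaussian model with a Mahalanobis-distance score takes the place of a deep anomaly detector. The key property it demonstrates is that the detector is fitted to real samples only, so no deepfake examples are needed at training time.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Hypothetical face embeddings: in practice these would come from a learned
# representation; here the "real" class is a synthetic cluster around zero.
real_train = rng.normal(loc=0.0, scale=1.0, size=(500, DIM))

# One-class training: fit a Gaussian to real samples only -- no deepfakes
# are seen during training.
mu = real_train.mean(axis=0)
cov = np.cov(real_train, rowvar=False)
cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(DIM))  # regularised inverse

def anomaly_score(x):
    """Mahalanobis distance of an embedding from the 'real' distribution."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# The decision threshold is derived from the real data itself (99th
# percentile of training scores), so unseen manipulation methods need no
# labelled examples.
train_scores = np.array([anomaly_score(x) for x in real_train])
threshold = np.percentile(train_scores, 99)

# A sample whose embedding drifts far from the real cluster is flagged.
fake_test = rng.normal(loc=4.0, scale=1.0, size=DIM)
print(anomaly_score(fake_test) > threshold)  # True: flagged as an anomaly
```

Because only the "real" class is modelled, any generation technique that pushes embeddings away from that distribution is detectable in principle, which is the generalization advantage the one-class regime aims for.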