Quality of Personalization, Explainability and Robustness of Recommendation Algorithms¶
A Master's Thesis Project for evaluating recommendation algorithms on Quality, Personalization, Explainability, and Robustness.
📖 Overview¶
This project provides an in-depth analysis of recommendation algorithms, focusing on their resilience to data stress, resistance to anonymization, explainability, and the ethical risks associated with their deployment. The research is particularly relevant given the recently adopted EU AI Act and the Omnibus Directive.
Built on Open Source
This project extends and unifies implementations from the Microsoft Recommenders library and PGPR, providing a custom dataset loader and evaluation pipeline for comparative analysis.
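The snippet below is only a minimal sketch of how such a loader and evaluation pipeline could be wired together for a comparative run. Every name in it (`InteractionLoader`, `hit_rate_at_k`, `compare`, and the column layout) is an illustrative assumption, not the actual interface of this repository, Microsoft Recommenders, or PGPR.

```python
# Minimal sketch of a dataset loader and comparative evaluation pipeline.
# NOTE: every name here (InteractionLoader, hit_rate_at_k, compare, the CSV
# column layout) is a hypothetical illustration, not the repository's API.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

import numpy as np
import pandas as pd


@dataclass
class InteractionLoader:
    """Loads a user-item interaction log and produces a random train/test split."""
    path: str
    test_fraction: float = 0.2
    seed: int = 42

    def load_split(self) -> Tuple[pd.DataFrame, pd.DataFrame]:
        df = pd.read_csv(self.path, names=["user_id", "item_id", "rating", "timestamp"])
        rng = np.random.default_rng(self.seed)
        in_test = rng.random(len(df)) < self.test_fraction
        return df[~in_test], df[in_test]


def hit_rate_at_k(recommend: Callable[[int, int], List[int]],
                  test: pd.DataFrame, k: int = 10) -> float:
    """Fraction of held-out interactions whose item appears in the user's top-k list."""
    hits = sum(row.item_id in recommend(row.user_id, k) for row in test.itertuples())
    return hits / max(len(test), 1)


def compare(algorithms: Dict[str, Callable],
            train: pd.DataFrame, test: pd.DataFrame) -> pd.DataFrame:
    """Fit every algorithm on the same split and report one shared metric per model."""
    rows = []
    for name, fit in algorithms.items():
        recommend = fit(train)  # fit() is expected to return a recommend(user_id, k) function
        rows.append({"algorithm": name, "hit_rate@10": hit_rate_at_k(recommend, test)})
    return pd.DataFrame(rows)
```

In a real run, each entry of the `algorithms` dictionary would wrap one of the compared models (for example a Microsoft Recommenders model or PGPR) behind the same `fit(train) -> recommend(user_id, k)` signature, so that every algorithm is scored on an identical split and metric.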
📚 Documentation¶
Use the links below to navigate to the section you need.

- Getting Started: Complete installation guide and first experiment setup.
- Run Experiments: Configure and execute algorithm comparisons.
- Results & Analysis: Research findings, publications, and how to cite this work.
- Architecture: Code structure, design patterns, and extension points.
❓ Research Questions¶
This research addresses the following key questions:
- Robustness Analysis: How do different recommendation algorithm families compare in terms of resilience to data anonymization and perturbation techniques? (See the sketch following this list for a minimal illustration.)
- Privacy-Personalization Trade-off: What is the relationship between recommendation accuracy, personalization quality, and user privacy preservation?
- Explainability Assessment: To what extent can each algorithm generate meaningful explanations for its recommendations?
- Ethical Risk Evaluation: How can ethical risks be identified, measured, and mitigated in accordance with EU AI Act requirements?
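As a concrete illustration of the robustness question above, the sketch below perturbs a growing fraction of the training interactions (randomly swapping item ids as a simple stand-in for anonymization or noise injection) and records how much a fixed evaluation metric degrades relative to the clean baseline. The perturbation scheme and the `evaluate` callback are illustrative assumptions, not the thesis' actual stress-testing protocol.

```python
# Illustrative robustness probe: how much does a metric drop as the training
# data is increasingly perturbed? Randomly swapping item ids is a simple
# stand-in for the anonymization/noise techniques studied in the thesis.
from typing import Callable

import numpy as np
import pandas as pd


def perturb_interactions(train: pd.DataFrame, fraction: float, seed: int = 0) -> pd.DataFrame:
    """Replace the item id of a random fraction of interactions with a random item."""
    rng = np.random.default_rng(seed)
    perturbed = train.copy()
    idx = rng.random(len(perturbed)) < fraction
    items = perturbed["item_id"].to_numpy()
    perturbed.loc[idx, "item_id"] = rng.choice(items, size=int(idx.sum()))
    return perturbed


def robustness_curve(
    train: pd.DataFrame,
    evaluate: Callable[[pd.DataFrame], float],  # fits a model on the data and returns a metric
    fractions=(0.0, 0.1, 0.2, 0.4),
) -> pd.DataFrame:
    """Metric value and relative drop as the perturbation level increases."""
    baseline = evaluate(train)
    rows = []
    for f in fractions:
        score = evaluate(perturb_interactions(train, f))
        rows.append({"fraction": f, "metric": score,
                     "relative_drop": (baseline - score) / max(baseline, 1e-12)})
    return pd.DataFrame(rows)
```

Plotting `relative_drop` against `fraction` for each algorithm family then yields directly comparable robustness curves under the same perturbation budget.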
🤝 Contributing & Support¶
- 🐛 Bug Reports: GitHub Issues
- 💬 Questions & Ideas: GitHub Discussions
- 🛠️ Code Contributions: See our Contributing Guidelines