Quality of Personalization, Explainability and Robustness of Recommendation Algorithms

A Master's Thesis Project for evaluating recommendation algorithms on Quality, Personalization, Explainability, and Robustness.


📖 Overview

This project provides an in-depth analysis of recommendation algorithms, focusing on their resilience to data stress, resistance to anonymization, explainability, and the ethical risks associated with their implementation. The research is particularly relevant given the recently adopted EU AI Act and the Omnibus Directive.

Built on Open Source

This project extends and unifies implementations from the Microsoft Recommenders library and PGPR, providing a custom dataset loader and evaluation pipeline for comparative analysis.
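
As a rough illustration of what a comparative evaluation pipeline does, the sketch below scores two toy algorithms on the same data with a shared ranking metric. Every name in it (the toy interaction counts, `precision_at_k`, `evaluate`) is an illustrative assumption, not the project's actual loader or pipeline API; see the Architecture section for the real code structure.

```python
# Minimal sketch of a comparative evaluation pipeline (illustrative only, not the
# project's API): every algorithm is run on the same data and scored with one metric.
from typing import Callable, Dict, List, Set

def precision_at_k(recommended: List[int], relevant: Set[int], k: int = 10) -> float:
    """Fraction of the top-k recommended items that the user actually interacted with."""
    return sum(item in relevant for item in recommended[:k]) / k

def evaluate(algorithms: Dict[str, Callable[[Dict[int, int]], List[int]]],
             train: Dict[int, int], test_relevant: Set[int], k: int = 10) -> Dict[str, float]:
    """Apply the same metric to every algorithm so the scores are directly comparable."""
    return {name: precision_at_k(recommend(train), test_relevant, k)
            for name, recommend in algorithms.items()}

if __name__ == "__main__":
    # Toy training data: item id -> number of observed interactions.
    train = {3: 5, 1: 40, 4: 2, 2: 35, 5: 1}
    test_relevant = {1, 2, 5}  # held-out items the user interacted with

    algorithms = {
        "most_popular": lambda data: sorted(data, key=data.get, reverse=True),
        "unsorted_baseline": lambda data: list(data),
    }
    print(evaluate(algorithms, train, test_relevant, k=3))
    # prints roughly {'most_popular': 0.67, 'unsorted_baseline': 0.33}
```

In the actual project, the same pattern applies with models and metrics from Microsoft Recommenders and PGPR rather than the toy functions shown here.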


📚 Documentation

Use the sections below to navigate to the part you need.

  • Getting Started: Complete installation guide and first experiment setup.
  • Run Experiments: Configure and execute algorithm comparisons.
  • Results & Analysis: Research findings, publications, and how to cite this work.
  • Architecture: Code structure, design patterns, and extension points.


❓ Research Questions

This research addresses the following key questions:

  1. Robustness Analysis: How do different recommendation algorithm families compare in terms of resilience to data anonymization and perturbation techniques? (A minimal sketch of such a check follows this list.)
  2. Privacy-Personalization Trade-off: What is the relationship between recommendation accuracy, personalization quality, and user privacy preservation?
  3. Explainability Assessment: To what extent can each algorithm generate meaningful explanations for its recommendations?
  4. Ethical Risk Evaluation: How can ethical risks be identified, measured, and mitigated in accordance with EU AI Act requirements?
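
As a hedged, minimal sketch of how question 1 can be probed (the helpers below are assumptions for illustration, not the thesis code): perturb a share of the training interactions, re-evaluate, and report how much a quality metric drops relative to the clean run.

```python
# Illustrative robustness probe (assumed helpers, not the thesis implementation):
# corrupt a fraction of the interactions, re-evaluate, and summarize the metric drop.
import random
from typing import List, Tuple

def perturb(interactions: List[Tuple[int, int]], noise_rate: float, n_items: int,
            seed: int = 42) -> List[Tuple[int, int]]:
    """Replace a fraction of item ids with random ones to simulate data perturbation."""
    rng = random.Random(seed)
    return [(user, rng.randrange(n_items)) if rng.random() < noise_rate else (user, item)
            for user, item in interactions]

def relative_degradation(metric_clean: float, metric_perturbed: float) -> float:
    """Share of clean-data quality lost under perturbation; 0.0 means fully robust."""
    return (metric_clean - metric_perturbed) / metric_clean if metric_clean else 0.0
```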

🤝 Contributing & Support