Overview

With Artificial Intelligence (AI) becoming more pervasive and powerful, there is a growing need to ensure that AI agents behave consistently with our interests, goals, abilities, and limitations. We aim to study the underpinnings of trustworthy AI and Machine Learning (ML), and to develop artificial agents that can understand and communicate with humans (such as users, domain experts, and other stakeholders) and that can adapt and align to their users by quickly integrating feedback in the form of explanations, knowledge, instructions, and preferences. Our research has broad applications across all fields in which AI and ML are applied, from medical decision making to smart personal assistants.

Research directions

Our research encompasses conceptual development, theoretical analysis, and empirical evaluation of AI and ML approaches. The key topics of interest are:

  • Explainable AI;
  • Interactive Machine Learning;
  • Human-interpretable Representation Learning;
  • Neuro-symbolic Integration of Learning and Reasoning;
  • Preference Elicitation.

Members

Publications

For a complete list, see Stefano Teso’s Google Scholar profile.

Ongoing collaborations

  • Antonio Vergari, University of Edinburgh (UK);
  • Davide Mottin, Aarhus University (Denmark);
  • Guy Van den Broeck, UCLA (USA);
  • Kristian Kersting, TU Darmstadt (Germany);
  • Andrea Passerini, University of Trento (Italy).