Overview

The vast quantity of data available on the internet has enabled the development of extremely powerful artificial intelligence models. Such models, however, require substantial computational resources for training and inference, which limits their deployment in real-world scenarios outside the confines of massive data centers.
Inspired by the human brain's ability to learn new concepts efficiently, the group's main activity is to research novel training techniques that reduce the computational burden of neural models while improving their ability to learn new concepts continuously and to extract knowledge from different modalities.

Research directions

The key topics of interest are:

  • Low-supervision models
  • Self-supervision
  • Cross-modal learning
  • Incremental learning
  • Debiasing neural networks
  • Domain adaptation and generalization

Members

Publications

For a complete list, see Paolo Rota's Scholar profile.

Web

Research group webpage