The general goal of the research group is to understand how the brain perceives, integrates, and interprets multisensory signals to form coherent neural representations and predictions of the surrounding world, particularly during multimodal face-to-face interactions between humans. We do so by applying a multimodal approach that combines cutting-edge techniques in psychophysics, psycholinguistics, eye-tracking, fMRI, and MEG.
Multimodal Processing of Face-to-Face Interactions:
- Are multisensory communicative signals integrated in the brain along a dedicated lateral pathway following domain-general principles of Bayesian Causal Inference?
- Audio-visual speech representation in the brain: to what degree are heard and seen speech 'audio-visual'? The role of premotor and motor cortex in audio-visual speech perception and prediction.
Brain Development and Cross-Modal Plasticity:
- The role of sensory experience, and its interplay with genetics, in shaping brain functional specialization and the emergence of cross-modal brain plasticity;
- Combining neurobiological and behavioural indices to predict speech and language outcomes after cochlear implantation in deaf children and adults.
Audio-Visual Speech Processing:
- The role of lip-reading in speech perception and production; developmental trajectories of lip-reading abilities across the lifespan; learning to process speech without hearing or seeing it; audio-visual speech processing in deaf individuals with cochlear implants; the relationship between lip-reading and text-reading.
Stefania Benetti, Principal Investigator
Giulia Mazzi, Post-Lauream Research Trainee
For a complete list, see Stefania Benetti's personal webpage
- University of Trento Starting Grant Giovani Ricercatori, January 2022 - December 2023.
- Ambra Ferrari, Max Planck Institute of Psycholinguistics
- Francesco Pavani, CAtS Group, CIMeC
- Olivier Collignon, Université catholique de Louvain
- Daphne Maurer, McMaster University
- Claudio Zmarich, ISTC-CNR Padua