Broadly speaking, we study the computational processes involved in human language, with particular attention to semantics. Much of our current work focuses on the meaning of quantification in natural language, logical reasoning, and explaining cross-linguistic semantic similarities. We pay particular attention to how various complexity measures can help us understand the difficulty of core cognitive abilities such as language learning, comprehension, and reasoning. Our approach combines formal methods (logic, computational modeling, simulations) with empirical ones (neurobehavioral experiments, corpus linguistics), drawing inspiration from methods and concepts in theoretical computer science, e.g., formal language theory, computational complexity theory, logic, machine learning, and information theory. The underlying goal is to better understand human cognition from various theoretical angles and to contribute to mathematically and computationally rigorous theories of language and cognition.

Research directions

Our current research focuses on three main lines: 

  1. Computational explanations of semantic universals across languages
  2. Psycholinguistic research on meaning representations and their variability
  3. Studying the reasoning abilities of humans and artificial neural networks

Team

  • Jakub Szymanik, Principal Investigator
  • Tamar Johnson, Research Fellow
  • Alexandra Sarafoglou, Research Fellow
  • Manuel Vargas Guzmán, PhD student
  • Heming Strømholt Bremnes, PhD student 
  • Saskia Leymann, PhD student
  • Gert-Jan Munneke, PhD student
  • Iris van de Pol, PhD student


Recent representative papers (for complete list see: https://jakubszymanik.com/articles/):

  1. Milica Denić and Jakub Szymanik. Are most and more than half truth-conditionally equivalent? Journal of Semantics, 2022.
  2. Fausto Carcassi and Jakub Szymanik. Neural Networks track the logical complexity of Boolean concepts. Open Mind, 2022.
  3. Fausto Carcassi, Shane Steinert-Threlkeld, and Jakub Szymanik. Monotone Quantifiers Emerge via Iterated Learning. Cognitive Science, 45 (8), 2021.
  4. Shane Steinert-Threlkeld and Jakub Szymanik. Ease of Learning Explains Semantic Universals. Cognition, 195, 2020.
  5. Jakub Szymanik. Quantifiers and Cognition: Logical and Computational Perspectives. Studies in Linguistics and Philosophy, Springer, 2016.


Grants

  • Language in Interaction grant: Sharing vague meanings
  • ABC Project Grant: From rigid theory to cognitive models: individual differences in semantic
  • NCN grant: Hybrid Reasoning Models

Ongoing Collaborations

We have been collaborating with many researchers across the world, currently including colleagues at the University of Amsterdam, the Polish Academy of Sciences, the University of Washington, Utrecht University, Tel Aviv University, Heinrich Heine University Düsseldorf, the University of Tübingen, the Norwegian University of Science and Technology, Facebook AI Research, and more.


Research Group webpage