Home (Current Conference)

The Fifth International Convention on the

Mathematics Of Neuroscience and AI

Tuesday 28th - Friday 31st May, 2024.

Rome.

Two decades into the 21st century, how close are we to a unified mathematical model of the brain? How close are we to building an artificial intelligence that can surpass it?

In this exploratory symposium, we invite submissions presenting mathematical models of brain function or computational ideas about intelligence.

We give priority to models that account for brain or behavioural data, or that provide supporting simulations.

Keynote Speakers

Professor Peter Dayan

Max Planck Institute, Tübingen

Professor Sophie Deneve

Ecole Normale Supérieure, Paris

Professor Kevin Ellis

Cornell University

Dr Feryal Behbahani

Google DeepMind

Professor Wolfgang Maass

Technische Universität Graz

Dr Giovanni Pezzulo

National Research Council of Italy, Rome

Sessions

Chair: Dr Ilia Sucholutsky

Princeton

Chair: Dr Ishita Dasgupta

Google DeepMind

Invited speakers

Chair: Dr Francesca Mastrogiuseppe

Champalimaud Center for the Unknown

Chair: Dr James Whittington

Stanford / Oxford / Zyphra

Invited speakers

Chair: Dr Antonella Maselli

NRC Italy

Chair: Dr Ruairidh Battleday

Harvard

Invited speakers

Chair: Professor Dan Nicolau Jr

King’s College London

Chair: Yasmine Ayman

Harvard

Invited speakers

About the convention

One of the major scientific projects of the 20th century was the study of computation: we built devices that could carry out operations previously possible only in the human mind. This analogy has proven extremely productive, with neural and cognitive theories inspiring the development of powerful algorithms, and vice versa in the computational study of the brain and mind.

In this convention we aim to identify and develop novel computational frameworks for the study of the brain and mind, and take those findings back into the creation of novel algorithms for solving difficult problems and simulating intelligence.

Our content comes from four main fields: biocomputation, neural theory, cognitive science, and machine learning / artificial intelligence (AI). Each of these fields has developed its own computational language and concepts for an overlapping set of underlying principles.

By bringing together leading researchers from these fields in on-line and off-line settings, we aim to build bridges between them, from which novel findings, insights, and frameworks can spark.

Biocomputation

The prevailing modern scientific paradigm of the brain is a computational one. But if the brain is a computer—which is an 'if'—it must have operating principles, abilities and limitations that are radically different to those of artificial computers. In this session, talks will explore diverse topics within quantitative neuroscience that consider the brain as a device for computation, broadly conceived.

Session Chairs

Professor Dan V. Nicolau Jr (King’s College London)

Yasmine Ayman (Harvard University)

Keynote Talks

Professor Wolfgang Maass (Technische Universität Graz): Local prediction-learning in high-dimensional spaces enables neural networks to plan

Professor Sophie Deneve (Ecole Normale Supérieure, Paris)

Invited Talks

Professor Christine Grienberger (Brandeis): Dendritic computations underlying experience-dependent hippocampal representation

Professor Dan V. Nicolau Jr (King’s College London): A Rose by Any Other Name: Towards a Mathematical Theory of the Neuroimmune System

Dr James Whittington (Oxford / Stanford / Zyphra): Unifying the mechanisms of the hippocampal and prefrontal cognitive maps

Spotlight Talks

Paul Haider (University of Bern): Backpropagation through space, time and the brain

Deng Pan (Oxford): Structure learning in the human hippocampus and orbitofrontal cortex

Francesca Mignacco (CUNY Graduate Center & Princeton University): Nonlinear manifold capacity theory with contextual information

Angus Chadwick (University of Edinburgh): Rotational dynamics enables noise-robust working memory

Carla Griffiths (Sainsbury Wellcome Centre): Neural mechanisms of auditory perceptual constancy emerge in trained animals

Harsha Gurnani (University of Washington): Feedback controllability constrains learning timescales of motor adaptation

Arash Golmohammadi (Department for Neuro- and Sensory Physiology, University Medical Center Göttingen): Heterogeneity as an algorithmic feature of neural networks

Sacha Sokoloski (University of Tübingen): Analytically-tractable hierarchical models for neural data analysis and normative modelling

Alejandro Chinea Manrique de Lara (UNED): Cetacean's Brain Evolution: The Intriguing Loss of Cortical Layer IV and the Thermodynamics of Heat Dissipation in the Brain

Neural Theory

While neuroscientists have increasingly powerful deep learning models that predict neural responses, it is not clear that these models are correspondingly increasing our understanding of what neurons are actually doing. In this session, we will take a more mechanistic approach to understanding how networks of neurons afford complex computations, considering both mechanistic neural models and mathematical theories that say how neurons should behave and, crucially, why they behave that way.

Session Chairs

Dr James Whittington (University of Oxford; Stanford University)

Dr Francesca Mastrogiuseppe (Champalimaud Center for the Unknown)

Keynote Talks

Professor Peter Dayan (Max Planck Institute, Tübingen): Controlling the Controller: Instrumental Manipulations of Pavlovian Influences via Dopamine

Professor Mackenzie Mathis (EPFL): Learnable Neural Dynamics

Invited Talks

Professor Athena Akrami (UCL): Circuits and computations for learning and exploiting sensory statistics

Professor Nicolas Brunel (Duke): Roles of inhibition in shaping the response of cortical networks

Dr Sophia Sanborn (Science): Symmetry and Universality

Dr Lea Duncker (Stanford): Evaluating dynamical systems hypotheses using direct neural perturbations

Dr Kris Jensen (UCL): An attractor model of planning in frontal cortex

Spotlight Talks

Cristiano Capone (ISS): Online network reconfiguration: non-synaptic learning in RNNs

Sam Hall-McMaster (Harvard University): Neural Prioritisation of Past Solutions Supports Generalisation

Alexander Mathis (EPFL): Modeling sensorimotor circuits with machine learning: hypotheses, inductive biases, latent noise and curricula

Stefano Diomedi (NRC Italy): Neural subspaces in three Parietal areas during reaching planning and execution

Sofia Raglio (Sapienza): Clones of biological agents solving cognitive task: hints on brain computation paradigms

Arno Granier (Bern): Confidence estimation and second-order errors in cortical circuits

Erik Hermansen (NTNU): The Ontogeny of the Grid Cell Network – Uncovering the Topology of Neural Representations

Steeve Laquitaine (EPFL): Cell types and layers differently shape the geometry of neural representations in a biophysically detailed model of the neocortical microcircuit.

Subhadra Mokashe (Brandeis University): Competition between memories for reactivation as a mechanism for long-delay credit assignment

Brendan A. Bicknell (UCL): Fast and slow synaptic plasticity enables concurrent control and learning

Vezha Boboeva (Sainsbury Wellcome Centre, UCL): Computational principles underlying the learning of sequential regularities in recurrent networks

Cognitive Science

How should an intelligent agent behave in order to best realize their goals? What inferences or actions should they make in order to solve an important computational task? Cognitive science aims to answer these questions at an abstract computational level, using tools from probability theory, statistical inference, and elsewhere.

In this session we will discuss how such optimal behavior should change under different conditions of uncertainty, background knowledge, multiple agents, or constraints on resources. This can be used to understand human behavior in the real world or the lab, as well as to build artificial agents that learn robust and generalizable world models from small amounts of data.

Session Chairs

Dr Ruairidh Battleday (Harvard / MIT)

Dr Antonella Maselli (NRC Italy)

Keynote Talks

Professor Anne Collins (UC Berkeley): Pitfalls and advances in computational cognitive modeling

Dr Giovanni Pezzulo (National Research Council of Italy, Rome): Embodied decision-making and planning

Invited Talks

Professor Bill Thompson (University of California, Berkeley): Interactive Discovery of Program-like Social Norms

Professor Dagmar Sternad (Northeastern): Human Control of Dynamically Complex Objects: Predictability, Stability and Embodiment

Professor Samuel McDougle (Yale): Abstractions in Motor Memory and Planning

Dr Fred Callaway (NYU / Harvard): Cultural evolution of compositional problem solving

Dr Maria Eckstein (DeepMind): Understanding Human Learning and Abstraction Using Cognitive Models and Artificial Neural Networks

Spotlight Talks

Nora Harhen (UC Irvine): Developmental differences in exploration reveal differences in structure inference

Simone D'Ambrogio (Oxford): Discovery of Cognitive Strategies for Information Sampling with Deep Cognitive Modelling and Investigation of their Neural Basis

Gaia Molinaro (UC Berkeley): Latent learning progress guides hierarchical goal selection in humans

Lucy Lai (Harvard): Policy regularization in the brain enables robustness and flexibility

Roey Schurr (Harvard): Dynamic computational phenotyping of human cognition

Yulin Dong (Peking): Optimal mental representation of social networks explains biases in social learning and perception

Antonino Visalli (Padova): Extensions of the Hierarchical Gaussian Filter to Wiener diffusion processes

Frank Tong (Vanderbilt): Improved modeling of human vision by incorporating robustness to blur in convolutional neural networks

Lance Ying (Harvard): Grounding Language about Belief in a Bayesian Theory-of-Mind

Jorge Eduardo Ramírez-Ruiz (Universitat Pompeu Fabra): The maximum occupancy principle (MOP) as a generative model of realistic behavior

Rory John Bufacchi (Chinese Academy of Sciences): Egocentric value maps of the near-body environment

Matteo Alleman (Columbia): Modeling behavioral imprecision from neural representations

Colin Conwell (Johns Hopkins): Is visual cortex really “language-aligned”? Perspectives from Model-to-Brain Comparisons in Human and Monkeys on the Natural Scenes Dataset

Ryan Low (UCL): A normative account of the psychometric function and how it changes with stimulus and reward distributions

Artificial Intelligence

Machine learning and artificial intelligence (AI) aim to create algorithms that solve difficult problems and simulate complex intelligent behavior. Many of these algorithms are based on findings and theory from the study of the brain and mind.

Recent rapid advances in these fields have seen the creation of algorithms and agents that can—finally—solve complex real-world problems across a wide range of domains. What are these advances, and how can we take them further? What remains beyond their capacity, and how can we overcome that? What might forever lie beyond their capabilities—or will anything?

In this session we will hear from some of the world's leading experts in academia and tech: proponents of structure, proponents of scale, and some radical suggestions for reframing fundamental problems of intelligence.

Keynote Talks

Dr Feryal Behbahani (Google DeepMind)

Professor Kevin Ellis (Cornell): Doing experiments and acquiring concepts using language and code

Session Chairs

Dr Ishita Dasgupta (Google DeepMind)

Dr Ilia Sucholutsky (Princeton University)

Invited Talks

Professor Najoung Kim (BU, Google): Comparing human and machine inductive biases for compositional linguistic generalization using semantic parsing: Results and methodological challenges

Professor Rafal Bogacz (Oxford): Modelling diverse learning tasks with predictive coding

Dr André Barreto (DeepMind): Generalised policy updates and neuroscience

Dr Wilka Carvalho (Harvard): Predictive representations: building blocks of intelligence

Spotlight Talks

Quentin Ferry (MIT): Emergence and Function of Abstract Representations in Self-Supervised Transformers

Michael Spratling (University of Luxembourg): A margin-based replacement for cross-entropy loss that improves the robustness of deep neural networks on image classification tasks

Luke Eilers (University of Bern): A generalized neural tangent kernel for surrogate gradient learning

Samuel Lippl (Columbia University): The impact of task structure, representational geometry, and learning mechanism on compositional generalization

Anita Keshmirian (Ludwig Maximilian University of Munich): Investigating Causal Judgments in Humans and Large Language Models

Sunayana Rane (Princeton): Can Generative Multimodal Models Count to Ten?

Michael Lepori (Brown): A Mechanistic Analysis of Same-Different Relations in ViTs

Paul Riechers (Beyond Institute for Theoretical Science; BITS): Computational mechanics predicts internal representations of transformers

Aly Lidayan (UC Berkeley): RL Algorithms Are BAMDP Policies: Understanding Exploration, Intrinsic Motivation, and Optimality

Nasir Ahmad (Donders Institute for Brain, Cognition and Behaviour): Correlations are ruining your gradient descent

Motahareh Pourrahimi (McGill; Mila): Human-like Behavior and Neural Representations Emerge in a Neural Network Trained to Search for Natural Objects from Pixels

Pablo Lanillos (Spanish National Research Council): Object-centric reasoning and control from pixels

Chiara Mastrogiuseppe (Universitat Pompeu Fabra): Controlled Maximal Variability Leads to Reliable Performance in Recurrent Neural Networks

Topics

Including but not limited to:

  • biocomputation

  • cognition/protocognition

  • neural circuits and ANNs

  • computational neuroscience

  • machine learning

  • artificial intelligence

  • mathematical approaches to consciousness

  • algorithmic science

  • computational social science

Schedule


Tues 28th May 2024 (UTC+1)

08:00 - 09:00 Check in and registration

09:30 Opening remarks

Dr Ruairidh Battleday and Professor Dan Nicolau Jr

Session 1: Artificial Intelligence

09:50 - 10:00 Session Introduction (Dr Ishita Dasgupta and Dr Ilia Sucholutsky)

10:00 - 10:40 Keynote: Professor Kevin Ellis (Cornell): Doing experiments and acquiring concepts using language and code

10:40 - 11:00 Dr Andre Barreto (Google DeepMind): Generalised policy updates and neuroscience

11:00 - 11:20 Coffee Break

11:20 - 11:40 Dr Ilia Sucholutsky (Princeton): Learning from almost no data

11:40 - 12:00 Dr Wilka Carvalho (Harvard): Predictive representations: building blocks of intelligence

12:20 - 14:00 Lunch

14:00 - 14:20 Professor Rafal Bogacz (Oxford): Modelling diverse learning tasks with predictive coding

14:20 - 15:40 Spotlights

15:40 - 16:00 Coffee Break

16:00 - 16:40 Keynote: Dr Feryal Behbahani (DeepMind)

16:40 - 17:20 Panel: Fundamental Challenges in AI Research

17:20 - 19:00 Welcome reception (Villa Wolkonsky)

Weds 29th May 2024 (UTC+1)

Session 2: Biocomputation

09:50 - 10:00 Session Introduction (Professor Dan Nicolau Jr and Yasmine Ayman)

10:00 - 10:20 Professor Dan Nicolau Jr (KCL): A Rose by Any Other Name: Towards a Mathematical Theory of the Neuroimmune System

10:20 - 10:40 Professor Christine Grienberger (Brandeis): Dendritic computations underlying experience-dependent hippocampal representation

11:00 - 11:20 Coffee Break

11:20 - 12:20 Spotlights

12:20 - 14:00 Lunch

14:00 - 14:40 Keynote: Professor Wolfgang Maass (Technische Universität Graz): Local prediction-learning in high-dimensional spaces enables neural networks to plan

14:40 - 15:00 Dr James Whittington (Oxford / Stanford / Zyphra): Unifying the mechanisms of the hippocampal and prefrontal cognitive maps

15:00 - 16:00 Poster Session 1

16:00 - 16:20 Professor Najoung Kim (BU, Google; REMOTE): Comparing human and machine inductive biases for compositional linguistic generalization using semantic parsing: Results and methodological challenges

16:20 - 17:00 Virtual Poster Session

21:00 - 22:00 Neuromonster Arts Salon (Taylor Beck; Hotel San Giovanni, Downstairs Meeting Room)

Thurs 30th May 2024 (UTC+1)

Session 3: Cognitive science

09:50 - 10:00 Session Introduction (Dr Antonella Maselli and Dr Ruairidh Battleday)

10:00 - 10:40 Keynote: Professor Anne Collins: Pitfalls and advances in computational cognitive modeling

10:40 - 11:00 Professor Bill Thompson (University of California, Berkeley): Interactive Discovery of Program-like Social Norms

11:00 - 11:20 Coffee Break

11:20 - 11:40 Dr Fred Callaway (NYU / Harvard): Cultural evolution of compositional problem solving

11:40 - 12:00 Dr Maria Eckstein (DeepMind): Understanding Human Learning and Abstraction Using Cognitive Models and Artificial Neural Networks

12:00 - 12:20 Professor Samuel McDougle (Yale): Abstractions in Motor Memory and Planning

12:20 - 14:00 Lunch

14:00 - 14:40 Keynote: Dr Giovanni Pezzulo (NRC of Italy): Embodied decision-making and planning

14:40 - 15:00 Professor Dagmar Sternad (Northeastern): Predictability and stability in the manipulation of complex objects

15:00 - 15:40 Spotlights

15:40 - 16:00 Coffee Break

16:00 - 16:20 Spotlights

16:20 - 17:00 Poster Session 2

19:30 Coaches leave Hotel San Giovanni for Conference Dinner

Fri 31st May 2024 (UTC+1)

Session 4: Neural theory

09:50 - 10:00 Session Introduction (Dr James Whittington and Dr Francesca Mastrogiuseppe)

10:00 - 10:40 Keynote: Professor Peter Dayan: Controlling the Controller: Instrumental Manipulations of Pavlovian Influences via Dopamine

10:40 - 11:00 Dr Sophia Sanborn (Science): Symmetry and Universality

11:00 - 11:20 Coffee Break

11:20 - 11:40 Professor Athena Akrami (UCL): Circuits and computations for learning and exploiting sensory statistics

11:40 - 12:00 Professor Nicolas Brunel (Duke): Roles of inhibition in shaping the response of cortical networks

12:20 - 14:00 Lunch

14:00 - 14:20 Dr Lea Duncker (Stanford): Evaluating dynamical systems hypotheses using direct neural perturbations

14:20 - 14:40 Dr Kris Jensen (UCL): An attractor model of planning in frontal cortex

14:40 - 15:40 Spotlights

15:40 - 16:00 Coffee Break

16:00 - 16:40 Keynote: Professor Mackenzie Mathis (EPFL): Learnable Neural Dynamics

16:40 - 17:20 Panel: The Future of Computational Neuroscience

17:20 - 19:00 Closing reception (Villa Wolkonsky)

Venue

Villa Wolkonsky

Via Ludovico di Savoia

00185 Roma RM, Italy

Google Maps

Built to entertain a Russian princess’s literary friends, the Villa Wolkonsky has a rich history, including art, war, love, and betrayal.

https://en.wikipedia.org/wiki/Villa_Wolkonsky

Conference Chairs

Dr Ruairidh Battleday

Center for Brain Science,

Harvard University

Center for Brains, Minds, Machines

MIT

Dr James Whittington

Sir Henry Dale Wellcome Trust Fellow

Stanford / Oxford

Dr Giovanni Pezzulo

Director, CONAN LAB

National Research Council of Italy, Institute of Cognitive Sciences and Technologies (ISTC-CNR), Rome

Professor Dan V. Nicolau Jr

King’s College London

Nuffield Department of Medicine,

University of Oxford

Sponsors

Many thanks to our generous sponsors:

AIJ

Harvard University, Department of Psychology