
  Causal generative world models for the perception of physical and social motion in human brains and AI


   UKRI Centre for Doctoral Training in Socially Intelligent Artificial Agents

This project is no longer listed on FindAPhD.com and may not be available.

Supervisors: Prof Lars Muckli, Dr Fani Deligianni
Status: No more applications being accepted
Funding: Funded PhD Project (Students Worldwide)

About the Project

Biological and artificial systems need to make sense of the world around them by interpreting sensory evidence in the context of internal models of the physical and social world. In humans, these world models are well described by causal generative models, which encode the statistical and causal dependencies in the world and enable mental simulation and prediction of social (Heider & Simmel, 1944) and physical events (Battaglia et al., 2013). Artificial intelligence (AI) is starting to construct causal generative models of the world, as these enable robust inference, powerful and flexible generalization, and learning in low-sample regimes (Schölkopf & von Kügelgen, 2022).

One of the key questions for biological and artificial systems is how sensory evidence should be contextualized within a causal generative model for the purpose of perceptual inference. The predictive processing framework proposes that the human brain performs this contextualization by combining top-down predictions from the generative model with bottom-up sensory evidence (Petro & Muckli, 2020).
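As a minimal illustration of this idea (not part of the project itself), a Gaussian predictive-coding sketch combines a top-down prediction and bottom-up sensory evidence by precision weighting; the residual between evidence and prediction is the prediction-error signal. All names and numbers below are illustrative assumptions.

```python
# Illustrative sketch of predictive processing: a top-down prediction (prior)
# and bottom-up sensory evidence (likelihood) are combined by precision
# weighting; the residual is the prediction error. Hypothetical example only.

def fuse(prediction, pred_var, evidence, evid_var):
    """Precision-weighted combination of prediction and sensory evidence."""
    pred_precision = 1.0 / pred_var
    evid_precision = 1.0 / evid_var
    w_pred = pred_precision / (pred_precision + evid_precision)
    posterior = w_pred * prediction + (1.0 - w_pred) * evidence
    posterior_var = 1.0 / (pred_precision + evid_precision)
    prediction_error = evidence - prediction  # signal carried by error units
    return posterior, posterior_var, prediction_error

# A confident prediction and noisy evidence: the posterior stays near the
# prediction, while the prediction error records the mismatch.
post, post_var, err = fuse(prediction=1.0, pred_var=0.1,
                           evidence=2.0, evid_var=1.0)
```

With these toy numbers, the posterior (≈1.09) is pulled only slightly toward the noisy evidence, illustrating how the reliability of each source determines its influence on the percept.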

So far, little is known about whether evidence contextualization for physical and social entities relies on shared or distinct mechanisms. This project will study how the integration of predictions with sensory evidence occurs for physical and social events and interactions.

Proposed Methods and Expected Results

We will use a novel task in which participants track physical objects and/or agents that move and interact with physical or agent-like dynamics. Concurrent ultra-high-field fMRI measurements and cortical-layer-resolved decoding will enable us to dissociate top-down predictions, bottom-up sensory evidence, and error representations.

We will further disentangle the contribution of predictions and sensory evidence by introducing visual occlusions (requiring predictions and mental simulation in the absence of visual input) and violations of expectations (i.e., unexpected switch of the task-generative motion and interaction model from physical to agent-like or vice versa).

We hypothesize that (1) early visual cortices are the locus of error computation for both physical and social perceptual signals. For example, we expect to find predictions related to both visual objects and agents in the deep layers of V1 in occluded areas, and prediction errors in superficial layers. (2) We expect predictions to originate in different higher-level cortices: namely, we expect object-driven cortex (Yildirim et al., 2019) to be involved in predicting physical motion and interactions, whereas we expect frontal (Sliwa & Freiwald, 2017) and temporal-parietal regions (Frith & Frith, 2006) to serve as the source of predictions of social motion and interaction.

We will, furthermore, construct generative brain-inspired recurrent artificial neural network models for physical and social motion and interactions. We will compare three classes of models: (a) joint models for physical and social motion and interactions, (b) separate models for physical and social motion and interactions, and (c) models with a shared early perceptual stage and disjoint later stages. (3) We expect the latter class to best capture human brain-activation patterns and to be the most sample-efficient AI model in terms of learning.

Impact for Artificial Social Intelligence

Key to human-centric social AI is the alignment between the causal generative world models of humans and of AI systems, which underlie the perception of our world. Our proposal aims to shed light on humans' generative models by interrogating the predictions those models make. The expected results and constructed models are, therefore, important stepping stones toward building social AI that is aligned with humans in its perception of physical and social events.

Eligibility

Applicants must have, or expect to obtain, the equivalent of a 1st or 2:1 degree in any subject relevant to the CDT, including but not limited to computing science, psychology, linguistics, mathematics, sociology, engineering, and physics.

Applicants will be asked to provide two references as part of their application.

Funding

Funding is available to cover the annual tuition fees for UK home applicants, as well as an annual stipend at the standard UKRI rate (currently £17,668 for 2022/23). To be classed as a home applicant, candidates must meet the following criteria:

  • Be a UK National (meeting residency requirements), or
  • Have settled status, or
  • Have pre-settled status (meeting residency requirements), or
  • Have indefinite leave to remain or enter.

As per UKRI funding guidelines, up to 30% of studentships may be awarded to international applicants who do not meet the UK home status requirements. Funding for successful international students will match that of home students and no international top-up fees will be payable. 
