How do landmark and self-motion cues combine during navigation to generate spatial representations?

This excerpt note is about how landmark and self-motion cues combine during navigation, based on Campbell et al., 2018.

Campbell, Malcolm G., Samuel A. Ocko, Caitlin S. Mallory, Isabel I. C. Low, Surya Ganguli & Lisa M. Giocomo. Principles governing the integration of landmark and self-motion cues in entorhinal cortical codes for navigation. Nature Neuroscience, volume 21, pages 1096–1106 (2018).

To navigate, the brain combines self-motion information with sensory landmarks to form a position estimate. The neural substrates thought to support such position coding include functionally defined medial entorhinal cortex (MEC) cell types, namely grid cells, head direction cells, border cells, and speed cells. Together, these neurons generate an internal map of space, with their codes emerging from interactions between self-motion cues, such as locomotion and optic flow, and sensory cues from environmental landmarks.

However, the principles by which MEC cells integrate self-motion versus landmark cues remain incompletely understood, and how multisensory self-motion cues combine to drive MEC speed cells is equally unknown. In addition, while previous work often ascribes the neural basis of path integration to functionally defined MEC cell types, the degree to which behaviourally measured path-integration position estimates and MEC neural codes follow the same cue-combination principles remains unclear.

Here, the authors examine the principles by which both mouse behaviour and MEC cell classes integrate self-motion with visual landmark cues. To do this, they analysed the neural activity and behaviour of mice while they explored virtual reality (VR) environments. By combining these experimental approaches with an attractor-based network model, they propose a framework for understanding how optic flow, locomotion and landmark cues interact to generate MEC firing patterns and behavioural position estimates during navigation.

 

A coupled-oscillator attractor network model elucidates principles for the integration of landmarks and self-motion.

Combined, their data point to an asymmetry in the integration of locomotion and visual cues by grid and speed cells during gain changes. What underlying principles govern this cue-integration process? Previous work has shown that grid cells rely on self-motion input, which can reflect locomotion and optic flow cues, as well as on an error-correcting signal provided by landmarks. However, gain changes alter the relationship between distance travelled and the locations of landmarks, as well as the relationship between locomotion and optic flow. Therefore, the responses observed in their data likely reflect a complex interaction between the effects that gain changes have on these different relationships. To better understand these dynamics, they modelled the integration of self-motion with landmark input in a 1D attractor network (Fig. 5).

 

Fig. 5 A coupled-oscillator attractor network model of the integration of landmarks and self-motion input by grid cells. (Campbell et al., 2018)

They added external landmark inputs to standard attractor-based path-integration machinery, in which grid cells are modelled as a 1D periodic network of neurons with short-range excitatory and long-range inhibitory synaptic weight profiles. In the absence of external input, this neural architecture yields a family of steady-state bump activity patterns, and grid cell responses are generated when the animal’s velocity is used to drive phase advance in the network. External landmark inputs drive neuronal activity that changes as a function of the animal’s position relative to landmark cues and serve to reinforce the phase of the attractor network (Fig. 5 b,c).
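
To make this architecture concrete, below is a minimal Python sketch of such a 1D ring attractor with a velocity input and a landmark input. The kernel shape, gains and time constants are illustrative assumptions rather than the parameters of Campbell et al., and rotating the recurrent kernel by a velocity-dependent offset is a simple stand-in for the offset-connectivity mechanism of standard attractor models:

```python
import numpy as np

N = 128
theta = 2 * np.pi * np.arange(N) / N       # preferred phase of each neuron

def weights(offset=0.0):
    """Short-range excitation, longer-range inhibition on a ring.

    A nonzero offset rotates the kernel so recurrent drive peaks slightly
    ahead of the activity bump, making the bump move: a simple stand-in
    for velocity-driven phase advance.
    """
    d = theta[:, None] - theta[None, :] - offset
    return np.exp(2.0 * (np.cos(d) - 1.0)) - 0.5

def landmark_drive(landmark_phase, strength):
    """External input peaked at the phase implied by landmark cues."""
    return strength * np.exp(2.0 * (np.cos(theta - landmark_phase) - 1.0))

def simulate(velocity, landmark_phase, lm_strength,
             steps=3000, dt=0.01, tau=0.05, v_gain=0.02):
    rng = np.random.default_rng(0)
    r = rng.random(N) * 0.1                # random initial rates
    for _ in range(steps):
        drive = (weights(v_gain * velocity) @ r
                 + landmark_drive(landmark_phase, lm_strength))
        # Rectified, saturating rate dynamics keep the bump bounded.
        r += dt / tau * (-r + np.tanh(np.maximum(drive, 0.0)))
    return r

# With zero velocity, the landmark input nucleates and pins the bump.
r = simulate(velocity=0.0, landmark_phase=np.pi, lm_strength=0.3)
print("bump centred near neuron", int(np.argmax(r)), "of", N,
      "(landmark at neuron", N // 2, ")")
```

With a nonzero velocity and no landmark input (given an existing bump), the rotated kernel makes the bump drift around the ring, which is the path-integration mode; turning both inputs on with conflicting phases reproduces the tug-of-war that the gain manipulations probe.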

In this framework, gain changes correspond to a mismatch between the phase, or position estimate, of the attractor network (red arrow, Fig. 5 a,c) and the phase of the landmark input (blue arrow, Fig. 5 b,c). In this situation, landmark inputs exert a corrective force on the attractor phase, pulling it toward the landmark phase (Fig. 5d). The dynamics governing this process are analogous to those of a coupled-oscillator system, in which the two oscillators are the grid cells, described by the attractor phase, and the landmark inputs, described by the landmark phase. Coupled-oscillator systems are well studied in physics and provide a clarifying analogy for the cue-integration process here.
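
In reduced form, this analogy is often summarized by an Adler-type phase equation; the following is a generic sketch of the analogy, with symbols chosen here for illustration, not the authors’ exact formulation:

```latex
\frac{d\varphi}{dt} = v_{\text{self}} + k\,\sin(\psi - \varphi)
```

where \varphi is the attractor phase (the network’s position estimate), \psi is the landmark phase, v_self is the self-motion drive, and k is the landmark coupling strength; the sine term is the corrective force pulling the attractor phase toward the landmark phase.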

 

Here they found principled regimes under which behaviourally measured position estimates and MEC codes differentially weight the influence of visual landmark and self-motion cues.

First, they found that conflicts between locomotion and visual cues caused grid cells to remap in an asymmetric manner, with gain increases causing phase shifts and gain decreases causing grid scale changes. This asymmetry was mirrored by multiple MEC speed signals.

Second, they developed a coupled-oscillator attractor model that explained how grid responses to gain manipulations could arise from competition between conflicting self-motion and landmark cues. This model successfully predicted grid responses to an intermediate gain change.

Finally, they used a path integration task to demonstrate a behavioural asymmetry in the weighting of visual versus locomotor cues that matched grid and speed responses.

Taken together, these findings provide a framework for understanding the dynamics of cue combination in MEC neural codes and navigational behaviour. This framework could be useful in interpreting grid cell responses to different environmental geometries, in which distortion, shearing, spatial-frequency changes or remapping could reflect competition between landmark and self-motion inputs, or context- or experience-dependent changes in these inputs.

The ability of the path integration system to operate in both subcritical and supercritical regimes likely serves an adaptive purpose during navigation. For example, the subcritical regime is appropriate when landmark input is close enough to the path-integration estimate to be used for error correction. However, if landmarks change location or become unreliable, creating a large disagreement between landmark input and path integration, the network can enter the supercritical regime and pull free from the influence of landmarks. The decoherence threshold could therefore reflect the animal’s expectations about the reliability of landmark input. The idea that nonlinear cue integration serves an adaptive purpose during navigation may be a more general principle of parahippocampal computation. Recent work used VR gain changes to show that hippocampal place cells integrate visual and locomotor information nonlinearly. These data strongly resemble the subcritical regime of their model, raising the possibility that some of the principles they reveal, governing the integration of different information sources by both MEC neural codes and behaviour, may generalize to other brain regions that support navigation.
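
As a concrete sketch of these two regimes, the Adler-type phase equation above can be rewritten for the offset between the landmark phase and the attractor phase. With a constant cue conflict delta (the relative drift rate induced by a gain change) and coupling strength k, the offset locks when |delta| < k (subcritical) and slips indefinitely when |delta| > k (supercritical). The parameters below are illustrative, not fitted to the paper’s data:

```python
import numpy as np

# Offset dynamics: d(offset)/dt = delta - k * sin(offset).
# |delta| < k  -> offset settles at arcsin(delta / k): phase-locked,
#                 landmarks correct path integration (subcritical).
# |delta| > k  -> no fixed point, the offset grows without bound:
#                 the attractor pulls free of landmarks (supercritical).

def phase_offset_trace(delta, k, dt=0.001, T=30.0):
    """Euler-integrate the phase-offset equation and return its trajectory."""
    offset = 0.0
    trace = []
    for _ in range(int(T / dt)):
        offset += dt * (delta - k * np.sin(offset))
        trace.append(offset)
    return np.array(trace)

k = 1.0                                       # landmark coupling strength
sub = phase_offset_trace(delta=0.5, k=k)      # |delta| < k: locks
sup = phase_offset_trace(delta=1.5, k=k)      # |delta| > k: decoheres

print(f"subcritical: offset settles near {sub[-1]:.3f} rad "
      f"(arcsin(0.5) = {np.arcsin(0.5):.3f})")
print(f"supercritical: offset keeps growing, reaching {sup[-1]:.1f} rad")
```

In this picture the decoherence threshold is simply |delta| = k, so an animal (or a model) that adjusts k according to how reliable it believes landmarks to be would move the threshold accordingly.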

 

For further information, please read the paper (Campbell et al., 2018).

Campbell, Malcolm G., Samuel A. Ocko, Caitlin S. Mallory, Isabel I. C. Low, Surya Ganguli & Lisa M. Giocomo. Principles governing the integration of landmark and self-motion cues in entorhinal cortical codes for navigation. Nature Neuroscience, volume 21, pages 1096–1106 (2018).

 

There is some relevant work in robotic navigation that combines visual cues and self-motion cues, such as RatSLAM. The framework in Campbell et al., 2018 could offer useful inspiration for enabling robots to navigate autonomously.

How could the RatSLAM model be extended to integrate visual and locomotor information nonlinearly in changing environments, inspired by the framework in Campbell et al., 2018?
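
One speculative direction, sketched below, is to gate the visual (local view) correction of a RatSLAM-like pose estimate by the size of the conflict between the current pose-cell estimate and the pose recalled by a matched view template, mimicking the subcritical/supercritical behaviour above. This is not part of RatSLAM’s actual implementation: the 1D ring simplification, the function names and the Gaussian reliability weight are all hypothetical choices for illustration.

```python
import numpy as np

# Hypothetical sketch: nonlinear gating of a visual correction in a
# RatSLAM-like update, inspired by Campbell et al.'s decoherence threshold.
# `pose_phase` stands in for the pose-cell network's estimate on a 1D ring;
# `view_phase` is the pose recalled by a matched local-view template.

def wrapped_diff(a, b):
    """Smallest signed angular difference a - b on the circle."""
    return (a - b + np.pi) % (2 * np.pi) - np.pi

def visual_correction(pose_phase, view_phase, k=0.5, threshold=1.0):
    """Landmark pull that weakens for large cue conflicts.

    Small mismatch: near-linear correction (subcritical regime).
    Mismatch well beyond `threshold`: the Gaussian weight suppresses the
    correction, letting path integration pull free (supercritical regime).
    """
    mismatch = wrapped_diff(view_phase, pose_phase)
    reliability = np.exp(-(mismatch / threshold) ** 2)
    return k * reliability * np.sin(mismatch)

def update_pose(pose_phase, velocity, view_phase, dt=0.1):
    """Path integration plus the nonlinearly gated visual correction."""
    return (pose_phase
            + dt * velocity
            + dt * visual_correction(pose_phase, view_phase)) % (2 * np.pi)

# A small conflict is corrected; a large conflict is mostly ignored.
print(update_pose(pose_phase=0.0, velocity=0.0, view_phase=0.3))  # pulled toward 0.3
print(update_pose(pose_phase=0.0, velocity=0.0, view_phase=2.5))  # barely moves
```

In a full RatSLAM system, the same kind of gating could scale the energy that local view cells inject into the 3D pose-cell network, and the threshold could itself be adapted from the recent history of view-template match quality.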

 

Some works about RatSLAM are listed in the following links.

How to perform Path Integration in RatSLAM?

How to represent a robot’s pose with a rate-coded continuous attractor network (CAN) in RatSLAM?

How does velocity affect the movement of activity of the Pose Cells in RatSLAM?

How to update the activity of pose cells in RatSLAM?

How Does Self-Motion Update the Head Direction Cell Attractor?

How to perform robot place recognition with multi-scale, multi-sensor system inspired by place cells?

How to enable robot cognitive mapping inspired by Grid Cells, Head Direction Cells and Speed Cells?

How to implement the internal dynamics of the head direction network in brain-inspired 1D SLAM?

Continuous Attractor Neural Network (CANN) and 1D CANN for Head Direction