How Self-Motion Updates the Head Direction Cell Attractor

Laurens and Angelaki (2018) review head direction (HD) cells. They propose a quantitative framework whereby the drive that updates the HD attractor represents a multisensory self-motion estimate, computed through an internal model that uses sensory prediction errors from vestibular, visual, and somatosensory cues to improve the on-line motor drive.
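As I read it, this framework is essentially a predictive estimator: a motor-based prediction of head velocity is compared with incoming sensory signals, and the resulting prediction error corrects the estimate. The toy sketch below is only my own illustration of that idea, not the paper's model; the function name, the fixed error_gain, and the simplifications (a single scalar velocity, no canal dynamics) are my assumptions.

```python
def self_motion_estimate(omega_motor, omega_sensed, error_gain=0.5):
    """Toy internal-model update of the head-velocity estimate.

    omega_motor  : angular head velocity predicted from the motor command (rad/s);
                   zero during passive motion.
    omega_sensed : angular velocity reported by vestibular/visual/somatosensory cues.
    error_gain   : fixed weighting of the prediction error (a stand-in for the
                   adaptive, Kalman-like weighting a real internal model would use).
    """
    prediction_error = omega_sensed - omega_motor        # sensory prediction error
    return omega_motor + error_gain * prediction_error   # refined self-motion estimate


# Active, self-generated turn: the motor command already predicts most of what the
# sensors report, so only a small residual correction is applied.
print(self_motion_estimate(omega_motor=0.8, omega_sensed=0.9))   # 0.85
# Passive turn: there is no motor prediction, so the estimate is driven entirely by
# the sensed velocity (scaled by the gain in this oversimplified version).
print(self_motion_estimate(omega_motor=0.0, omega_sensed=0.9))   # 0.45
```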

Some time ago I implemented a brain-inspired head direction system (1D SLAM) for robot heading control, based on the head direction cell model from neuroscience. Along the way I ran into some problems and came up with some new ideas about such a system. Fortunately, this recent paper (Laurens and Angelaki, 2018) reviews HD cells and has given me some useful inspiration.

Laurens, Jean, and Dora Angelaki. “The brain compass: a perspective on how self-motion updates the head direction cell attractor.” Neuron, Volume 97, Issue 2, pp. 275–289, 17 January 2018.

The paper raises many good questions, listed below, which are very helpful for building brain-inspired navigation systems for robots.

How does a neuronal attractor work?

How are vestibular cues and motor efference copies integrated during voluntary, self-generated head movements?

How do HD networks encode head orientation in three dimensions (3D)?

How does gravity influence azimuth coding?

How and why does the animal’s orientation relative to gravity influence the properties of HD cells, including the loss or reversal of HD tuning in upside-down orientations?

How do visual cues update the HD attractor?

How do multisensory self-motion cues update the HD attractor?

How are multiple signals combined to construct an appropriate multisensory self-motion estimate under all conditions (during active as well as passive motion, in light and in darkness), which then updates the HD cell attractor?

Are the self-motion velocity signals that update the HD attractor represented in egocentric or allocentric reference frames?

Can the HD system represent all three dimensions of head orientation in space, or is it just a one-dimensional (tilted) azimuth compass that can maintain its allocentric reference?

The HD system may depend on gravity even more, as it may monitor the animal’s 3D orientation in the world. Is there a 3D compass, and does it depend on gravity?

Note that encoding 3D head orientation in a 3D attractor would raise a “combinatorial explosion” issue (because of the large number of cells required to represent all 3D orientations and the large number of connections required to encode all possible rotations from one orientation to another). One solution that avoids this complexity is to encode head azimuth independently of the two other degrees of freedom and use gravity to define those remaining degrees of freedom, i.e., vertical head tilt. In fact, since correctly updating the azimuth HD ring attractor already requires knowledge of orientation relative to gravity, this solution does not require much additional computation or neural hardware (see the sketch after this list of questions).

Are tilt signals processed by a neuronal attractor, similar to azimuth velocity cues?

……
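To make the “tilted azimuth compass” idea above concrete for a robotic implementation, here is a minimal sketch under my own assumptions: heading is kept on a 1D ring of HD cells, tilt is carried separately by the gravity direction expressed in head coordinates, and the azimuth update uses only the component of angular velocity around the gravity axis. The function names, parameters, and sign conventions are mine, not from the paper, and the attractor dynamics are abbreviated to an activity bump placed at the integrated heading.

```python
import numpy as np

N = 60                                                     # HD cells on the azimuth ring
preferred = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

def attractor_bump(azimuth, kappa=5.0):
    """Von Mises activity bump centred on `azimuth` (attractor dynamics abbreviated)."""
    activity = np.exp(kappa * np.cos(preferred - azimuth))
    return activity / activity.max()

def update_orientation(azimuth, gravity_head, omega_head, dt):
    """One update of the 'tilted azimuth compass' idea described above.

    azimuth      : heading about the earth-vertical axis (rad)
    gravity_head : unit vector along gravity, expressed in head coordinates (tilt)
    omega_head   : angular velocity of the head, in head coordinates (rad/s)
    """
    # Azimuth velocity = component of rotation about the gravity (earth-vertical)
    # axis, so the ring attractor only ever needs a one-dimensional velocity input.
    omega_azimuth = float(np.dot(omega_head, gravity_head))
    azimuth = (azimuth + omega_azimuth * dt) % (2.0 * np.pi)

    # Tilt is tracked separately by rotating the gravity vector with the head:
    # for a world-fixed vector, dg/dt = -omega x g in head coordinates.
    gravity_head = gravity_head - dt * np.cross(omega_head, gravity_head)
    gravity_head = gravity_head / np.linalg.norm(gravity_head)

    return azimuth, gravity_head


# Example: head upright (gravity along -z in head coordinates, a sign convention
# assumed here) yawing about the gravity axis at 0.5 rad/s for one second.
azimuth, gravity = 0.0, np.array([0.0, 0.0, -1.0])
omega = np.array([0.0, 0.0, -0.5])
for _ in range(100):
    azimuth, gravity = update_orientation(azimuth, gravity, omega, dt=0.01)
print(azimuth)                             # ~0.5 rad
print(attractor_bump(azimuth).argmax())    # ring cell closest to the new heading
```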

For more information, please read the full paper (Laurens and Angelaki, 2018).
