How can robot cognitive mapping be enabled, inspired by grid cells, head direction cells, and speed cells?

Zeng, Taiping, and Bailu Si. “Cognitive Mapping Based on Conjunctive Representations of Space and Movement.” Frontiers in Neurorobotics 11 (2017).

In this work, the researchers developed a cognitive mapping model for mobile robots, taking advantage of the coding strategies of the spatial memory circuits in mammalian brains. The key components of the proposed model include HD cells, conjunctive grid cells, and local view cells. Both HD cells and conjunctive grid cells are modelled by continuous attractor networks that operate on the same principles.

More specifically, HD cells in the model represent arbitrary conjunctions of the animal's head direction and rotation. Owing to the asymmetric recurrent connections and the network dynamics, rotating activity patterns emerge spontaneously in the network. Angular velocity inputs select the pattern rotating at the matching speed, so the network tracks the head direction of the robot.
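The rotating-bump mechanism can be sketched as a small ring attractor. This is a minimal illustration, not the authors' implementation: the network size, kernel width, time step, and velocity coupling below are all hypothetical, and the velocity-dependent asymmetry is realised as a circular convolution with an offset recurrent kernel.

```python
import numpy as np

N = 100                                    # hypothetical number of HD cells on the ring
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)

def kernel(offset):
    """Recurrent kernel centred at `offset` radians; a nonzero offset makes
    the effective connectivity asymmetric, so the activity bump rotates."""
    d = np.angle(np.exp(1j * (theta - offset)))    # circular distance to offset
    return np.exp(-d**2 / (2 * 0.3**2))

def step(a, ang_vel, dt=0.05):
    """One update: recurrent drive via circular convolution with a kernel
    shifted by ang_vel*dt, followed by global inhibition and rectification."""
    drive = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(kernel(ang_vel * dt))))
    a = np.maximum(drive - drive.mean(), 0.0)
    return a / a.sum()

# Integrate a constant angular velocity of 1 rad/s for 5 s (100 steps of 0.05 s).
a = kernel(0.0)
a /= a.sum()
for _ in range(100):
    a = step(a, ang_vel=1.0)
decoded = theta[np.argmax(a)]              # bump has rotated to roughly 5 rad
```

The bump moves at a speed proportional to the angular velocity input, which is the core idea behind the conjunctive HD-by-velocity representation.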

Similarly, conjunctive grid cells in the model encode conjunctions of positions and translations in the two-dimensional environment.

The inputs from HD cells and speed cells activate a subset of the conjunctive grid cells that produce triangular patterns moving intrinsically in the neural tissue with velocity proportional to the running velocity of the robot.
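As a rough sketch of this translational update (the sheet size and gain below are hypothetical, and the shift is simplified to a Fourier phase shift on a periodic sheet rather than the paper's attractor dynamics), the moving grid pattern might be driven like this:

```python
import numpy as np

M = 64                                       # hypothetical size of the neural sheet (M x M)

def translational_velocity(speed, head_direction):
    """Convert linear speed and head direction into a 2-D velocity,
    as provided by the speed-cell and HD-cell inputs in the model."""
    return np.array([speed * np.cos(head_direction),
                     speed * np.sin(head_direction)])

def translate(sheet, vel, dt=0.05, gain=10.0):
    """Shift the activity pattern on the periodic sheet by vel*dt*gain cells
    via the Fourier shift theorem, so the pattern moves with a velocity
    proportional to the robot's running velocity."""
    dx, dy = gain * vel[0] * dt, gain * vel[1] * dt
    kx = np.fft.fftfreq(M)[None, :]          # spatial frequencies along columns (x)
    ky = np.fft.fftfreq(M)[:, None]          # spatial frequencies along rows (y)
    phase = np.exp(-2j * np.pi * (kx * dx + ky * dy))
    return np.real(np.fft.ifft2(np.fft.fft2(sheet) * phase))
```

A usage sketch: `translate(sheet, translational_velocity(0.4, np.pi / 4))` shifts the whole pattern along the heading direction by an amount proportional to the running speed.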

Both HD cells and conjunctive grid cells receive inputs from local view cells, which fire whenever the robot returns to a familiar scene and thereby provide anchoring cues.
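One way to picture this anchoring role (a hypothetical simplification; in the paper the local view cells project into the attractor networks, whose dynamics then settle the estimate) is a blend of the current activity with the pattern stored for the recognised scene:

```python
import numpy as np

def anchor(activity, stored_pattern, strength=0.3):
    """When a familiar local view is recognised, inject its associated
    activity pattern; the attractor dynamics can then complete the pattern
    and correct accumulated path-integration error. `strength` is a
    hypothetical mixing weight."""
    blended = (1.0 - strength) * activity + strength * stored_pattern
    return blended / blended.sum()
```

After anchoring, the activity at the remembered location is strengthened, pulling a drifted estimate back toward the scene's stored position.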

The major contributions in this paper are twofold.

First, the neural network framework presented in this paper is based on recent experimental studies on the hippocampal-entorhinal circuits and aims to model layer III and deep layers of the MEC and the hippocampus. In the system, conjunctive HD-by-velocity cells and conjunctive grid-by-velocity cells work hand in hand to integrate movement and sensory information and build a large-scale map.

Second, the neural dynamics of the hippocampal-entorhinal circuits is modelled. Provided with the inputs from local view cells, the neural dynamics of the system functions as a general mechanism for error correction and pattern completion.

Figure 1. The neural network architecture of the model and the diagram of information flow. The HD-by-velocity cells, updated by angular velocity, represent head directions. The grid-by-velocity cells receiving translational velocity, converted from linear speed and the head direction representations, provide positional representation, which is in turn utilized to build a cognitive map. The activities of the HD-by-velocity cells and the grid-by-velocity cells are encoded by heat maps with red for high activity and blue for no activity.


In RatSLAM, pose cells were developed to encode positions in large-scale environments for long-term robotic navigation tasks. Pose cells are abstract cells that update their activity by displacing a copy of the activity packet, rather than performing path integration through the dynamics of the network.
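The contrast can be made concrete with a toy one-dimensional version of the pose-cell update (the dimensions and gain are hypothetical): the packet is shifted directly by an amount computed from velocity, with no recurrent dynamics involved.

```python
import numpy as np

def displace_packet(packet, velocity, dt=0.1, cells_per_metre=10):
    """RatSLAM-style update: shift a copy of the activity packet by an
    integer number of cells derived from the robot's velocity, instead of
    letting attractor dynamics perform the path integration."""
    shift = int(round(velocity * dt * cells_per_metre))
    return np.roll(packet, shift)
```

In the conjunctive model summarised above, by contrast, the same displacement emerges from the velocity-modulated recurrent dynamics of the network itself.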

Only a small number of robot navigation systems use grid cell networks to perform path integration and represent the pose of a robot.

The theory of neural dynamics has been extensively adopted to model brain functions such as memory, navigation, and sensory integration. It has also become popular in robotics as a way to enhance the cognitive abilities of robots, for example in obstacle avoidance, coordinated path tracking, motion tracking, and grasping.

For more information, please read the Zeng et al. (2017) paper:

Zeng, Taiping, and Bailu Si. “Cognitive Mapping Based on Conjunctive Representations of Space and Movement.” Frontiers in Neurorobotics 11 (2017).