How a simple robotics model of mammal navigation is useful to interpret neurobiological recordings

Place recognition is a complex process involving idiothetic and allothetic information. In mammals, evidence suggests that visual information stemming from the temporal and parietal cortical areas ('what' and 'where' information) is merged at the level of the entorhinal cortex (EC) to build a compact code of a place. Local views extracted from specific feature points can provide information important for view cells (in primates) and place cells (in rodents) even when the environment changes dramatically. Robotics experiments using conjunctive cells that merge the 'what' and 'where' information of different local views show that such cells play an important role in obtaining place cells with strong generalization capabilities.
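To make the merging mechanism concrete, here is a minimal sketch of how a visual place cell could be computed from conjunctive 'what' x 'where' cells, each binding a landmark identity to the azimuth at which it was seen when the place was learned. The function names, parameters, and Gaussian azimuth tuning are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def conjunctive_activities(seen_ids, seen_azimuths,
                           stored_ids, stored_azimuths, sigma=0.3):
    """One conjunctive cell per stored (landmark, azimuth) pair.

    A cell fires when its landmark is recognized in the current view
    ('what') and the current bearing is close to the learned one ('where').
    """
    act = np.zeros(len(stored_ids))
    for k, (lid, az) in enumerate(zip(stored_ids, stored_azimuths)):
        if lid in seen_ids:
            cur = seen_azimuths[seen_ids.index(lid)]
            err = np.angle(np.exp(1j * (cur - az)))   # wrapped angular error
            act[k] = np.exp(-(err / sigma) ** 2)      # Gaussian azimuth tuning
    return act

def place_cell(seen_ids, seen_azimuths, stored_ids, stored_azimuths):
    # Mean over the conjunctive cells: activity degrades gracefully when
    # some landmarks vanish or shift, which is what yields generalization.
    return conjunctive_activities(seen_ids, seen_azimuths,
                                  stored_ids, stored_azimuths).mean()

# Place learned with three landmarks; tested with one landmark missing
# and another slightly displaced: the cell still responds, just less.
stored = (["tree", "door", "rock"], [0.1, 1.6, -2.0])
print(place_cell(["tree", "door", "rock"], [0.1, 1.6, -2.0], *stored))  # ~1.0
print(place_cell(["tree", "rock"], [0.3, -2.0], *stored))               # < 1.0
```

The averaging step is what gives the generalization the experiments highlight: removing a landmark lowers the place cell's response instead of silencing it.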

Fig. Visual place cell from the merging of 'what' and 'where' information. (Figure from Gaussier et al. 2019.)

In Gaussier et al. 2019, the authors show how a simple robotics model of mammal navigation can help interpret neurobiological recordings. They question current models that treat the dorsomedial entorhinal cortex (dMEC) as a path integrator. Instead, they propose that the EC is a generic merging tool that builds a compact representation of cortical activity. They summarize experiments and simulations showing that grid cells related to path integration (PI) could be explained as a modulo projection of cortical activity computed in the retrosplenial cortex (RSC), where PI could take place. Furthermore, they suggest that the visual grid cells recorded in the human EC could be explained by the same mechanism.
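As a rough illustration of the modulo-projection idea, the sketch below wraps 1D projections of a path-integrated position modulo a fixed spacing along three axes 60 degrees apart; where the three wrapped bumps coincide, the response peaks on a hexagonal lattice, yielding a grid-cell-like firing map. The spacing, orientation, and bump shape are assumptions for illustration, not the model's actual parameters.

```python
import numpy as np

def grid_response(x, y, spacing=0.5, orientation=0.0):
    """Grid-like field from a modulo projection of path-integrated position.

    Each of three axes (60 degrees apart) reads out a 1D projection of the
    (x, y) position and wraps it modulo the grid spacing; multiplying the
    resulting periodic bumps leaves peaks on a hexagonal lattice.
    """
    act = 1.0
    for k in range(3):
        theta = orientation + k * np.pi / 3           # axes at 0, 60, 120 deg
        proj = x * np.cos(theta) + y * np.sin(theta)  # 1D path-integration readout
        phase = (proj % spacing) / spacing            # modulo projection in [0, 1)
        act *= 0.5 * (1.0 + np.cos(2.0 * np.pi * phase))  # periodic bump
    return act

# Evaluate the field over a square arena to visualize the hexagonal pattern.
xs, ys = np.meshgrid(np.linspace(0, 2, 200), np.linspace(0, 2, 200))
field = grid_response(xs, ys)  # works element-wise on arrays
```

Because the third axis is a linear combination of the first two, any position whose first two projections wrap to zero automatically satisfies the third, which is why the peaks tile a hexagonal rather than square lattice.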

For further information, see Gaussier et al. 2019.

Philippe Gaussier, Jean Paul Banquet, Nicolas Cuperlier, Mathias Quoy, Lise Aubin, Pierre-Yves Jacob, Francesca Sargolini, Etienne Save, Jeffrey L. Krichmar, Bruno Poucet (2019). Merging information in the entorhinal cortex: what can we learn from robotics experiments and modeling? Journal of Experimental Biology 222: jeb186932. doi: 10.1242/jeb.186932. Published 6 February 2019.