How to represent location based on place cells?

This excerpt note is about a model of spatial location based on place cells, taken from Milford 2008.

Michael Milford. Robot Navigation from Nature: Simultaneous Localisation, Mapping, and Path Planning Based on Hippocampal Models. Springer-Verlag Berlin Heidelberg, pp. 74–79, 2008.

The place cells are modelled as a two-dimensional matrix of cells, with each cell tuned to be maximally activated when the robot is at a specific location. A coarse representation is used, with the path integration system tuned so that each place cell represents a physical area of approximately 250 mm by 250 mm.
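As a concrete illustration, here is a minimal Python sketch of how a physical position might be quantised onto such a grid. The 250 mm cell size comes from the text; the grid shape, function name, and clamping behaviour are assumptions for the example, not the book's implementation.

```python
# Illustrative sketch only: the 250 mm cell size is from the text, but the
# grid shape and this simple quantisation are assumptions for the example.
CELL_SIZE_MM = 250.0
GRID_SHAPE = (51, 51)   # a 51 by 51 grid, as in the sample plots below

def place_cell_index(x_mm, y_mm):
    """Return the (row, col) of the place cell whose preferred location
    covers the given physical position, clamped to the grid (no wraparound)."""
    row = min(int(y_mm // CELL_SIZE_MM), GRID_SHAPE[0] - 1)
    col = min(int(x_mm // CELL_SIZE_MM), GRID_SHAPE[1] - 1)
    return row, col

print(place_cell_index(1300.0, 600.0))   # -> (2, 5)
```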

To provide the appropriate system dynamics, the model uses a competitive attractor network (CAN) arrangement similar to that used for the head direction network.

The cells are arranged in a two-dimensional matrix with full excitatory interconnectivity between all cells. The excitatory weights are created using a two-dimensional version of the discrete Gaussian distribution. This ensures the activation of each place cell decreases as the robot moves away from the cell’s preferred location. One full iteration of the place CAN consists of the same five steps used in a head direction CAN iteration, with orientation replaced by location.
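Below is a minimal sketch of the excitatory step, assuming the two-dimensional Gaussian weight profile can be applied as a truncated convolution kernel. The book specifies full all-to-all connectivity, and its exact kernel size and width constants are not reproduced here; the values below are placeholders, and the remaining steps of the iteration (such as inhibition and normalisation) are omitted.

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size=7, sigma=1.5):
    """2-D discrete Gaussian weight profile: excitation is strongest between
    cells with nearby preferred locations and falls off with distance."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    w = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return w / w.sum()

def excitatory_step(activity, kernel):
    """One internal excitation step of the place CAN: each cell spreads its
    activity to its neighbours through the Gaussian weights. Zero padding
    (boundary='fill') means no wraparound at the matrix edges."""
    return convolve2d(activity, kernel, mode='same', boundary='fill')
```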

The main difference between the place cell and head direction networks is that there is no wraparound for the place network – cells at one edge of the matrix do not project strongly to cells at the opposite edge of the matrix (they do project very weakly to each other, because each cell excites every other to some degree). To minimise boundary effects, a buffer layer of cells is used around the edges of the place cell matrix; these cells can be active and affect cells in the matrix proper, but do not themselves have any spatial correspondence to the physical world.
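One plausible way to realise the buffer layer is sketched below: the state array is larger than the matrix proper, the extra ring of cells participates in the dynamics, and only the interior is read out as a location estimate. The buffer width here is an assumed value, and together with the zero-padded convolution above this keeps opposite edges from exciting each other strongly.

```python
import numpy as np

BUF = 3                    # assumed buffer width in cells, for illustration
SHAPE = (51, 51)           # the matrix proper, with physical correspondence

def new_place_network():
    """Activity array holding the matrix proper plus a surrounding buffer
    ring; buffer cells can hold activity but map to no physical location."""
    return np.zeros((SHAPE[0] + 2 * BUF, SHAPE[1] + 2 * BUF))

def location_estimate(activity):
    """Read out the most active cell of the matrix proper, ignoring the
    buffer ring around the edges."""
    interior = activity[BUF:-BUF, BUF:-BUF]
    return np.unravel_index(np.argmax(interior), interior.shape)
```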

Path Integration Using Ideothetic Information

The current activity packet in the place network is projected to its anticipated future position based on the velocity and orientation of the robot. Fig. 1 shows the overall path integration process. The robot orientation is not usually known exactly; rather, it is represented by a number of activated head direction cells. The ideothetic input is therefore processed on an individual basis for each head direction cell and its associated activity level. The place cell activity after path integration, $P'_{x,y}$, is given by:

$$P'_{x,y} = \sum_{k} H_k \sum_{i=0}^{1} \sum_{j=0}^{1} \alpha_{i,j}(v_k)\, P_{x + \delta x_k + i,\; y + \delta y_k + j}$$

The activity levels in each of four cells are multiplied by a fractional function $\alpha_{i,j}$ that is dependent on the velocity $v_k$, which is itself a function of the head direction cell index $k$ and the absolute velocity of the robot $v$. The four cells are selected based on the offsets $\delta x_k$ and $\delta y_k$, which are functions of $k$. The sum of this activity is then multiplied by the activity level $H_k$ of each head direction cell. This process is repeated for all head direction cells, and the sum of the outputs is injected into the place cells as ideothetic input. The coarseness of the place cell representation reduces the value of explicitly representing the uncertainty in the robot’s translational velocity with a probability distribution, as is often done with particle filters.
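A hedged sketch of this update follows, assuming the fractional function implements bilinear interpolation between the four cells straddling the true (sub-cell) displacement. The heading angles, velocity units, and the use of np.roll (which wraps at the edges, unlike the model's buffered, non-wrapping matrix) are simplifications for the example.

```python
import numpy as np

def path_integrate(P, H, headings, v, cell_size=250.0):
    """Project the activity packet forward: each head direction cell k
    proposes a displacement from v_k = f(k, v); the packet is shifted there
    with fractional (bilinear) weights over four neighbouring cells, and the
    shifted copies are blended by the head direction activities H[k].

    P        : 2-D place cell activity matrix
    H        : 1-D activity of the head direction cells
    headings : preferred orientation of each head direction cell (radians)
    v        : absolute translational velocity of the robot (mm per step)
    """
    out = np.zeros_like(P)
    for k, h in enumerate(H):
        if h == 0.0:
            continue
        # displacement in cell units for this head direction cell
        dx = v * np.cos(headings[k]) / cell_size
        dy = v * np.sin(headings[k]) / cell_size
        ox, oy = int(np.floor(dx)), int(np.floor(dy))   # integer offsets
        fx, fy = dx - ox, dy - oy                       # fractional parts
        # spread over the four cells around the true offset
        for i, wx in ((0, 1.0 - fx), (1, fx)):
            for j, wy in ((0, 1.0 - fy), (1, fy)):
                # np.roll wraps at the edges; the model instead relies on
                # the buffer cells and has no wraparound
                out += h * wx * wy * np.roll(P, (oy + j, ox + i), axis=(0, 1))
    return out
```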

Fig. 1. Path integration in the place cells. Current place cell activity is updated under path integration using orientation information stored in the head direction network together with translational velocity information.

Fig. 2. Sample activity profiles. (a) Activity in a 51 by 51 place cell grid. Total cell activity is normalised, so cell activity can be thought of as a rudimentary probability of being located at the cell’s corresponding physical location. (b) The activity caused by visual input for the same place cell grid (before scaling). From a vision perspective there is equal probability of being located in two different locations.

Fig. 3. View cell-place cell weight map. Lighter shading indicates stronger connections from the view cell to the place cell. When activated, this local view cell injects activity strongly into one area of the place cells and weakly into several other areas.
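As a sketch of how such a weight map might be used, assuming one learnt map per local view cell and an arbitrary injection gain (the function name, array shapes, and gain value are all illustrative, not the book's notation):

```python
import numpy as np

def inject_visual_activity(P, view_activity, view_weights, gain=0.1):
    """Allothetic (visual) input: each active local view cell injects
    activity into the place cells through its learnt weight map, like the
    one shown in Fig. 3.

    P             : (rows, cols) place cell activity
    view_activity : (n_views,) local view cell activities
    view_weights  : (n_views, rows, cols) view-to-place weight maps
    """
    # weighted sum of the maps of all active view cells
    visual = np.tensordot(view_activity, view_weights, axes=1)
    return P + gain * visual
```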

For more detail, please read Milford 2008.
