How does the brain encode abstract state-space representations in high-dimensional environments?

Cross L, Cockburn J, Yue Y, O’Doherty JP. Using deep reinforcement learning to reveal how the brain encodes abstract state-space representations in high-dimensional environments. Neuron. 2020 Dec 7:S0896-6273(20)30899-0. doi: 10.1016/j.neuron.2020.11.021.

In Brief
Cross et al. scanned humans playing Atari games and used a deep reinforcement learning algorithm as a model of how humans map high-dimensional sensory inputs onto actions. Representations in the intermediate layers of the algorithm were used to predict behavior and neural activity throughout a sensorimotor pathway.
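
To make the mapping concrete, below is a minimal sketch of the kind of deep Q-network (DQN) used as the model, following the standard Atari DQN architecture of Mnih et al. (2015). The layer sizes, PyTorch framework, and class/variable names are illustrative assumptions, not the authors' exact configuration; the point is that stacked game frames pass through intermediate layers whose activations form a compact state representation before the final readout of action values.

```python
# Illustrative DQN sketch (assumed Mnih et al. 2015 Atari architecture;
# not the authors' exact model). Pixels -> hidden layers -> Q-values.
import torch
import torch.nn as nn

class DQN(nn.Module):
    def __init__(self, n_actions: int):
        super().__init__()
        # Three convolutional layers compress 4 stacked 84x84 grayscale
        # frames into progressively more abstract feature maps.
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        # Fully connected hidden layer: the intermediate "state-space"
        # representation whose activations can be compared to neural data.
        self.fc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 7 * 7, 512), nn.ReLU())
        # Linear readout: one Q-value per available action.
        self.q_head = nn.Linear(512, n_actions)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        hidden = self.fc(self.conv(frames))  # compact state representation
        return self.q_head(hidden)           # value of each candidate action

# Usage: one state (4 stacked frames) -> Q-values for, say, 6 actions.
q_values = DQN(n_actions=6)(torch.zeros(1, 4, 84, 84))
print(q_values.shape)  # torch.Size([1, 6])
```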

Summary
Humans possess an exceptional aptitude to efficiently make decisions from high-dimensional sensory observations. However, it is unknown how the brain compactly represents the current state of the environment to guide this process. The deep Q-network (DQN) achieves this by capturing highly nonlinear mappings from multivariate inputs to the values of potential actions. We deployed DQN as a model of brain activity and behavior in participants playing three Atari video games during fMRI. Hidden layers of DQN exhibited a striking resemblance to voxel activity in a distributed sensorimotor network, extending throughout the dorsal visual pathway into posterior parietal cortex. Neural state-space representations emerged from nonlinear transformations of the pixel space bridging perception to action and reward. These transformations reshape axes to reflect relevant high-level features and strip away information about task-irrelevant sensory features. Our findings shed light on the neural encoding of task representations for decision-making in real-world situations.
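
One common way to test whether a network's hidden layers "resemble" voxel activity is a voxelwise encoding model: regress each voxel's fMRI time series on the hidden-layer activations and evaluate predictions out of sample. The sketch below illustrates that logic with random stand-in data; the ridge penalty, cross-validation scheme, data shapes, and variable names are assumptions for illustration, not the paper's exact pipeline (which would also involve steps such as hemodynamic convolution of the activations).

```python
# Hedged sketch of a voxelwise encoding analysis: predict voxel signals
# from DQN hidden-layer activations. All data here are random stand-ins;
# shapes and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_timepoints, n_features, n_voxels = 500, 512, 100

# Stand-ins for real data: hidden-layer activations sampled at fMRI
# timepoints, and the measured voxel time series.
layer_activations = rng.standard_normal((n_timepoints, n_features))
voxel_signals = rng.standard_normal((n_timepoints, n_voxels))

# One ridge regression per voxel; cross-validated R^2 quantifies how well
# the network layer predicts that voxel's activity.
scores = [
    cross_val_score(Ridge(alpha=1.0), layer_activations,
                    voxel_signals[:, v], scoring="r2", cv=5).mean()
    for v in range(n_voxels)
]
print(f"mean cross-validated R^2 across voxels: {np.mean(scores):.3f}")
```

With real data, voxels whose cross-validated scores are reliably above chance would be the ones said to carry layer-like state-space information.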
