Development of a Comprehensive Computational Model of Multi-Sensory Integration in the Hippocampal Spatial Cell Network
Implementing Organization
Indian Institute of Technology (Madras)
Principal Investigator
Prof. Srinivasa Chakravarthy
Indian Institute of Technology (Madras)
CO-Principal Investigator
Dr. Ayan Mukhopadhyay
Indian Institute of Technology (IIT)
About
Navigation is a vital skill for animal survival. The hippocampal formation in the temporal lobe of the brain acts as a natural GPS system: different neurons in the hippocampal formation serve different spatial functions that facilitate navigation. For example, ‘place cells' in the CA1 region of the hippocampus code for the animal's current location, ‘grid cells' in the entorhinal cortex (EC) support path integration (PI), and ‘head direction (HD) cells' in the subiculum code for the animal's current head direction. Although path integration forms the basis of navigation, other sensory inputs such as vision, vestibular signals, and audition also play an important role. A number of existing modelling studies use PI to model these spatial cells (1)(2)(3) but do not account for other sensory inputs, owing to limitations in the available modelling techniques.

The recent development of Deep Neural Networks (DNNs) has matched human performance in certain perceptual classification tasks. In the visual domain, for example, DNNs trained on visual object recognition have matched even human error patterns across object classes (4)(5), variations of viewpoint (6), shape (7), and judgements of object similarity (8). In the auditory domain, DNNs trained on speech and music recognition also closely match human performance (9)(10). We therefore propose to develop a general, deep learning-based modelling framework that describes the emergence of spatial cell responses and can also explain behavioural responses that involve a combination of PI and other sensory information.
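To make the notion of path integration concrete, the sketch below shows a minimal dead-reckoning update: position is estimated purely by integrating self-motion cues (heading and speed) over time, with no landmark input. This is an illustrative toy, not part of the proposed model; the function name and the representation of heading/speed sequences are assumptions.

```python
import numpy as np

def path_integrate(start, headings, speeds, dt=1.0):
    """Estimate position by integrating self-motion cues (path integration).

    start    : initial (x, y) position
    headings : heading angle (radians) at each time step
    speeds   : movement speed at each time step
    dt       : duration of one time step

    Returns the full estimated trajectory, including the start point.
    """
    pos = np.asarray(start, dtype=float)
    trajectory = [pos.copy()]
    for theta, v in zip(headings, speeds):
        # Update position along the current heading; errors in theta or v
        # accumulate over time, which is why real navigation also relies
        # on vision, vestibular, and auditory cues to correct drift.
        pos += v * dt * np.array([np.cos(theta), np.sin(theta)])
        trajectory.append(pos.copy())
    return np.array(trajectory)

# Example: two unit-speed steps east, then two steps north.
traj = path_integrate([0.0, 0.0],
                      headings=[0.0, 0.0, np.pi / 2, np.pi / 2],
                      speeds=[1.0, 1.0, 1.0, 1.0])
print(traj[-1])  # estimated final position, approximately (2, 2)
```

In a PI-only model, any noise in the heading or speed signal accumulates without bound; multi-sensory inputs provide the external fixes that keep such an estimate anchored.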