Known for: Recurrent neural network (RNN) models of the brain
Fields: Computational and theoretical neuroscience
Institutions: Harvard University
Kanaka Rajan is a computational neuroscientist in the Department of Neurobiology at Harvard Medical School and founding faculty in the Kempner Institute for the Study of Natural and Artificial Intelligence[1] at Harvard University.[2] Rajan trained in engineering, biophysics, and neuroscience, and has pioneered novel methods and models to understand how the brain processes sensory information. Her research seeks to understand how important cognitive functions — such as learning, remembering, and deciding — emerge from the cooperative activity of multi-scale neural processes, and how those processes are affected by various neuropsychiatric disease states. The resulting integrative theories about the brain bridge neurobiology and artificial intelligence.
Early life and education
Rajan was born and raised in India. She completed a Bachelor of Technology (B.Tech.) in Industrial Biotechnology at the Center for Biotechnology, Anna University, Tamil Nadu, India, in 2000, graduating with distinction.[3][4]
In 2002, Rajan began post-graduate study in neuroscience at Brandeis University, where she did experimental rotations with Eve Marder and Gina G. Turrigiano before joining Larry Abbott's laboratory, in which she completed her master's degree (MA).[3] In 2005, when Abbott moved from Brandeis to Columbia, she transferred to the Ph.D. program in Neuroscience at Columbia University and began her Ph.D. with Abbott at the Center for Theoretical Neuroscience.[5]
Doctoral research
In her graduate work, Rajan used mathematical modelling to address neurobiological questions.[6] The main component of her thesis was a theory of how the brain interprets subtle sensory cues within the context of its internal experiential and motivational state to extract unambiguous representations of the external world.[7] This work focused on the mathematical analysis of neural networks containing distinct excitatory and inhibitory cell types as models of neurons and their synaptic connections. She showed that widening the distributions of excitatory and inhibitory synaptic strengths dramatically changes the eigenvalue distribution of the network's connectivity matrix.[8] In a biological context, these findings suggest that having a variety of cell types with different synaptic strength distributions shapes network dynamics, and that measured synaptic strength distributions can be used to probe the characteristics of those dynamics.[8] Electrophysiology and imaging studies in many brain regions have since supported the predictions of this phase transition hypothesis.
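The effect of widening the synaptic strength distributions on the eigenvalue spectrum can be sketched numerically. The sketch below is illustrative and in the spirit of random-matrix analyses of excitatory–inhibitory networks; the network size, excitatory fraction, and width parameter `sigma` are hypothetical choices, not values from the published work.

```python
import numpy as np

def ei_spectrum(n=400, frac_exc=0.8, sigma=1.0, seed=0):
    """Eigenvalues of a random connectivity matrix whose columns are
    excitatory (positive) or inhibitory (negative). Illustrative only."""
    rng = np.random.default_rng(seed)
    n_exc = int(frac_exc * n)
    # Draw non-negative synaptic strengths; sigma sets the width
    # of the strength distribution.
    J = np.abs(rng.normal(0.0, sigma / np.sqrt(n), size=(n, n)))
    # Make the last columns inhibitory, scaled so inhibition
    # roughly balances excitation on average.
    J[:, n_exc:] *= -frac_exc / (1.0 - frac_exc)
    # Subtract column means so average connection strength is balanced,
    # removing the large outlier eigenvalue.
    J -= J.mean(axis=0, keepdims=True)
    return np.linalg.eigvals(J)

# The spectral radius grows with the width sigma of the
# synaptic strength distribution.
r_narrow = np.abs(ei_spectrum(sigma=0.5)).max()
r_wide = np.abs(ei_spectrum(sigma=1.5)).max()
```

Because the eigenvalues scale with the width of the strength distribution, `r_wide` exceeds `r_narrow`, which is the qualitative effect the analysis predicts: broader strength distributions push the network toward a different dynamical regime.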
This work employed powerful methods from random matrix theory[8] and statistical mechanics.[9] Rajan's early, influential work[10] with Abbott and Haim Sompolinsky integrated physics methodology into mainstream neuroscience research — initially by producing experimentally verifiable predictions, and subsequently by establishing these tools as an essential component of the data-modelling arsenal. Rajan completed her Ph.D. in 2009.[3]
Postdoctoral research
From 2010 to 2018, Rajan worked as a postdoctoral research fellow at Princeton University with theoretical biophysicist William Bialek and neuroscientist David W. Tank.[11] At Princeton, she and her colleagues developed and employed a broad set of tools from physics, engineering, and computer science to build new conceptual frameworks for describing the relationship between cognitive processes and biophysics across many scales of biological organization.[12]
Modelling feature selectivity
In her postdoctoral work with Bialek, Rajan explored an innovative method for modelling the neural phenomenon of feature selectivity.[13] Feature selectivity is the idea that neurons are tuned to respond to specific, discrete components of incoming sensory information, and that these individual components are later merged to generate an overall perception of the sensory landscape.[13] To understand how the brain might receive complex inputs yet detect individual features, Rajan treated the problem as one of dimensionality reduction rather than applying the typical linear model approach.[13] Using quadratic forms as features of a stimulus, she showed that the maximally informative variables can be found without prior assumptions about their characteristics, allowing unbiased estimates of neurons' receptive fields.[13]
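Why quadratic stimulus features matter can be illustrated with a standard spike-triggered covariance sketch. The hidden feature direction, noise model, and nonlinearity below are hypothetical, and this is not the maximally-informative-dimensions estimator itself, only a minimal demonstration that a quadratic (phase-invariant) feature is recoverable from second-order statistics where a purely linear analysis fails.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 20, 50_000
w = np.zeros(d)
w[5] = 1.0                              # hidden feature direction (hypothetical)

X = rng.normal(size=(n, d))             # Gaussian white-noise stimuli
energy = (X @ w) ** 2                   # quadratic, sign-invariant response
p_spike = energy / (energy + 1.0)       # saturating spike probability
spikes = rng.random(n) < p_spike

Xs = X[spikes]
# Spike-triggered average: near zero, because the response ignores
# the sign of the projection -- a linear model sees nothing.
sta = Xs.mean(axis=0)

# Spike-triggered covariance: the variance change along w is large,
# so the top eigenvector of the covariance difference recovers w.
C = np.cov(Xs.T) - np.cov(X.T)
evals, evecs = np.linalg.eigh(C)
recovered = evecs[:, np.argmax(evals)]
overlap = abs(recovered @ w)            # close to 1 when recovery succeeds
```

The design point is that a second-order (quadratic) statistic captures structure that averages away at first order; the information-theoretic approach in the paper generalizes this idea without assuming Gaussian stimuli.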
Recurrent neural network modelling
Rajan then worked with David Tank to show that sequential activation of neurons, a common feature of working memory and decision making, can emerge from neural network models with random connectivity.[14] The resulting process, termed "Partial In-Network Training" (PINning), serves both as a model and as a tool for matching real neural data recorded from the posterior parietal cortex during behavior.[14] Rather than relying on feedforward connections, the neural sequences in their model propagate through the network via recurrent synaptic interactions, guided by external inputs.[14] Their modelling highlighted the potential for learning to arise from highly unstructured network architectures.[14] The work, published in Neuron, uncovered how sensitivity to natural stimuli arises in neurons, how this selectivity influences sensorimotor learning, and how the neural sequences observed in different brain regions can arise from minimally plastic, largely disordered circuits.[14]
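The model class underlying this work, a randomly connected rate network, can be sketched as follows. In PINning, only a small fraction of the recurrent weights would be adjusted during learning; this sketch leaves the connectivity entirely random and untrained, and the network size, gain `g`, and integration parameters are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def simulate_rate_rnn(n=200, g=1.5, steps=500, dt=0.1, tau=1.0, seed=2):
    """Euler-integrate a random rate RNN:
        tau * dx/dt = -x + J @ tanh(x)
    J is disordered (Gaussian) connectivity with gain g; in PINning,
    a sparse subset of J's entries would be trained to produce
    target sequences. Here J stays random, for illustration."""
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, g / np.sqrt(n), size=(n, n))
    x = rng.normal(0.0, 0.5, size=n)
    rates = np.empty((steps, n))
    for t in range(steps):
        x += (dt / tau) * (-x + J @ np.tanh(x))
        rates[t] = np.tanh(x)      # firing rates, bounded in (-1, 1)
    return rates

rates = simulate_rate_rnn()
```

With gain above 1, such networks generate rich ongoing activity from disorder alone, which is the starting point PINning exploits: most of the structure is inherited from the random connectivity, and minimal plasticity shapes it into sequences.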
Career and research
In June 2018, Rajan became an assistant professor in the Department of Neuroscience and the Friedman Brain Institute at the Icahn School of Medicine at Mount Sinai. As the Principal Investigator of the Rajan Lab for Brain Research and AI in NY (BRAINY),[15] her work focuses on integrative theories to describe how behavior emerges from the cooperative activity of multi-scale neural processes. To gain insight into fundamental brain processes such as learning, memory, multitasking, or reasoning, Rajan develops theories based on neural network architectures inspired by biology as well as mathematical and computational frameworks that are often used to extract information from neural and behavioral data.[16] These theories use neural network models flexible enough to accommodate various levels of biological detail at the neuronal, synaptic, and circuit levels.
She uses a cross-disciplinary approach to investigate how neural circuits learn and execute functions ranging from working memory to decision making, reasoning, and intuition.[17] Her models are based on experimental data (e.g., calcium imaging, electrophysiology, and behavior experiments) and on new and existing mathematical and computational frameworks derived from machine learning and statistical physics.[16] Rajan continues to apply recurrent neural network modelling to behavioral and neural data. In collaboration with Karl Deisseroth and his team at Stanford University,[18] such models revealed that circuit interactions within the lateral habenula, a brain structure implicated in aversion, encode features of experience that guide the behavioral transition from active to passive coping – work published in Cell.[19][20]
In 2022, Rajan was promoted to Associate Professor[25] with tenure in the Department of Neuroscience and the Friedman Brain Institute at the Icahn School of Medicine at Mount Sinai.
Awards and honors
Visiting Research Fellowship, Janelia Research Campus, Howard Hughes Medical Institute (2016)
Brain and Behavior Foundation (formerly, NARSAD) Young Investigator Award (2015-2017)
Lectureship, Department of Molecular Biology and the Lewis-Sigler Institute for Integrative Genomics, Princeton University for Methods and Logic in Quantitative Biology (2011-2013)
Grant from the Organization for Computational Neurosciences (OCNS) (2011)