Award Abstract # 1845166
CAREER: Extracting principles of neural computation from large scale neural recordings through neural network theory and high dimensional statistics

NSF Org: IIS (Division of Information & Intelligent Systems)
Recipient: THE LELAND STANFORD JUNIOR UNIVERSITY
Initial Amendment Date: August 30, 2019
Latest Amendment Date: August 30, 2019
Award Number: 1845166
Award Instrument: Standard Grant
Program Manager: Kenneth Whang (kwhang@nsf.gov, (703) 292-5149)
IIS (Division of Information & Intelligent Systems)
CSE (Directorate for Computer and Information Science and Engineering)
Start Date: October 1, 2019
End Date: September 30, 2025 (Estimated)
Total Intended Award Amount: $500,000.00
Total Awarded Amount to Date: $500,000.00
Funds Obligated to Date: FY 2019 = $500,000.00
History of Investigator:
  • Surya Ganguli (Principal Investigator)
    sganguli@stanford.edu
Recipient Sponsored Research Office: Stanford University
450 JANE STANFORD WAY
STANFORD
CA  US  94305-2004
(650)723-2300
Sponsor Congressional District: 16
Primary Place of Performance: Stanford University
318 Campus Dr., S244
Stanford
CA  US  94305-7464
Unique Entity Identifier (UEI): HJD6G4D6TJY5
NSF Program(s): Robust Intelligence
Primary Program Source: 01001920DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 1045, 7495, 8089
Program Element Code(s): 749500
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Recent technological advances enable recordings from thousands of neurons during complex behaviors. Such experimental capabilities could reveal how the brain encodes sensations, forms memories, learns tasks, makes decisions, and generates motor actions. However, major obstacles stand in the way of a scientific understanding of how the psychological capabilities of the mind emerge from the biological wetware of the brain. First, existing data analytic methods are not adequate to make sense of the massive datasets currently being gathered from the brain. Second, existing theoretical methods are not adequate either for optimally designing large-scale neural recordings or for bridging scales from the collective biophysics of many neurons to the psychological processes underlying sensations, thoughts, and actions. This project will develop novel data analytic and theoretical methods to extract a conceptual understanding of how the brain gives rise to cognition. These methods will be tested on large-scale recordings from many experimental labs studying perception, memory, learning, decision making, and motor control. They will also be applied to developing better learning protocols and neural prosthetic devices.

This project will pursue three overarching aims. First, it will build on advances in high dimensional statistics to develop a theory of when and how subsets of neurons reflect the collective dynamics of the much larger unobserved circuit in which they are embedded. This theory will provide quantitative guidance for the efficient design of future large-scale recording experiments. Second, it will build on advances in deep learning to develop algorithmic methods for extracting a conceptual understanding of how complex neural networks solve tasks. These algorithmic methods will elucidate which aspects of network connectivity and dynamics are essential to understanding how neural circuits perform their computations, thereby providing guidance for what to measure in future neuroscience experiments. Finally, it will advance theories of neural network learning to better understand how the structure of prior experience determines learned neural connectivity, and how this learning process can be optimized. These general theoretical advances will be refined and tested in specific, close experimental collaborations: identifying feedback control laws in motor cortex, finding signatures of attractor dynamics in hippocampal memory circuits, understanding the neural algorithms for perception in the retina and for decision making in prefrontal cortex, and developing frameworks for understanding rapid rodent learning built upon prior experience.
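To make the first aim concrete, the following is a minimal illustrative sketch, not the project's method: it simulates a large population of neurons driven by a handful of shared latent signals, then asks how closely the effective dimensionality (participation ratio) of a randomly recorded subset matches that of the full population. The simulation, the choice of participation ratio as the dimensionality measure, and all parameter values are assumptions made purely for illustration; the sketch uses numpy.

# Illustrative sketch only (hypothetical parameters), not the project's actual method.
# A large circuit whose activity is driven by a few shared latent signals is
# subsampled, and the effective dimensionality (participation ratio) of the
# recorded subset is compared to that of the full population.
import numpy as np

rng = np.random.default_rng(0)
N, K, T = 1000, 5, 2000   # neurons in the full circuit, latent dimensions, time points

# Low-dimensional collective dynamics: K slow latent signals (random walks).
latents = np.cumsum(rng.normal(size=(T, K)), axis=0)
latents -= latents.mean(axis=0)

# Each neuron linearly mixes the latents and adds private noise.
mixing = rng.normal(size=(K, N))
activity = latents @ mixing + 0.5 * rng.normal(size=(T, N))

def participation_ratio(X):
    # Effective dimensionality of the covariance spectrum: (sum lambda)^2 / sum(lambda^2).
    lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    return lam.sum() ** 2 / (lam ** 2).sum()

print("full population:", round(participation_ratio(activity), 2))
for M in (10, 50, 200):
    subset = rng.choice(N, size=M, replace=False)
    print(f"random subset of {M} neurons:", round(participation_ratio(activity[:, subset]), 2))

In this toy setting, modest random subsets typically report an effective dimensionality close to that of the full population; characterizing when and why such subsampling is faithful in real, nonlinear, partially observed circuits is what the proposed theory aims to establish rigorously.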

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

(Showing: 1 - 10 of 37)
Bahri, Yasaman and Kadmon, Jonathan and Pennington, Jeffrey and Schoenholz, Sam S. and Sohl-Dickstein, Jascha and Ganguli, Surya "Statistical Mechanics of Deep Learning" Annual Review of Condensed Matter Physics, v.11, 2020, https://doi.org/10.1146/annurev-conmatphys-031119-050745
Campbell, Malcolm G and Attinger, Alexander and Ocko, Samuel A and Ganguli, Surya and Giocomo, Lisa M "Distance-tuned neurons drive specialized path integration calculations in medial entorhinal cortex" Cell Reports, v.36, 2021, https://doi.org/10.1016/j.celrep.2021.109669
Ebrahimi, Sadegh and Lecoq, Jérôme and Rumyantsev, Oleg and Tasci, Tugce and Zhang, Yanping and Irimia, Cristina and Li, Jane and Ganguli, Surya and Schnitzer, Mark J "Emergent reliability in sensory cortical coding and inter-area communication" Nature, v.605, 2022, https://doi.org/10.1038/s41586-022-04724-y
Fort, S. and Dziugaite, G.K. and Paul, M. and Kharaghani, S. and Roy, D.M. and Ganguli, S. "Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the Neural Tangent Kernel" Advances in Neural Information Processing Systems, v.33, 2020
Ganguli, Surya "Measuring the dimensionality of behavior" Proceedings of the National Academy of Sciences, v.119, 2022, https://doi.org/10.1073/pnas.2205791119
Gupta, Agrim and Fan, Linxi and Ganguli, Surya and Fei-Fei, Li "Metamorph: learning universal controllers with transformers" International Conference on Learning Representations, 2022
Harvey, Sarah E and Lahiri, Subhaneil and Ganguli, Surya "Universal energy-accuracy tradeoffs in nonequilibrium cellular sensing" Physical Review E, v.108, 2023, https://doi.org/10.1103/PhysRevE.108.014403
Hazon, Omer and Minces, Victor H and Tomàs, David P and Ganguli, Surya and Schnitzer, Mark J and Jercog, Pablo E "Noise correlations in neural ensemble activity limit the accuracy of hippocampal spatial representations" Nature Communications, v.13, 2022, https://doi.org/10.1038/s41467-022-31254-y
Kadmon, J. and Timcheck, J. and Ganguli, S. "Predictive coding in balanced neural networks with noise, chaos and delays" Advances in Neural Information Processing Systems, v.33, 2020
Kunin, D. and Nayebi, A. and Javier, S. and Ganguli, S. and Bloom, J. and Yamins, D. "Two Routes to Scalable Credit Assignment without Weight Symmetry" International Conference on Machine Learning, Proceedings of Machine Learning Research, v.37, 2020
Maheswaranathan, N. and Williams, A. and Golub, M. and Ganguli, S. and Sussillo, D. "Universality and individuality in neural dynamics across large populations of recurrent networks" Advances in Neural Information Processing Systems, v.32, 2019
