Award Abstract # 1552868
CAREER: Distributed Nonlinear Neural Computation

NSF Org: IOS
Division Of Integrative Organismal Systems
Recipient: BAYLOR COLLEGE OF MEDICINE
Initial Amendment Date: May 25, 2016
Latest Amendment Date: October 15, 2020
Award Number: 1552868
Award Instrument: Continuing Grant
Program Manager: Quentin Gaudry (IOS Division Of Integrative Organismal Systems, BIO Directorate for Biological Sciences)
Start Date: June 1, 2016
End Date: May 31, 2022 (Estimated)
Total Intended Award Amount: $591,195.00
Total Awarded Amount to Date: $591,195.00
Funds Obligated to Date: FY 2016 = $239,721.00
FY 2018 = $115,573.00
FY 2019 = $120,328.00
FY 2020 = $115,573.00
History of Investigator:
  • Xaq Pitkow (Principal Investigator)
    xaq@cmu.edu
Recipient Sponsored Research Office: Baylor College of Medicine
1 BAYLOR PLZ
HOUSTON
TX  US  77030-3411
(713)798-1297
Sponsor Congressional District: 09
Primary Place of Performance: Baylor College of Medicine
One Baylor Plaza
Houston
TX  US  77030-3411
Primary Place of Performance Congressional District: 09
Unique Entity Identifier (UEI): FXKMA43NTV21
Parent UEI: FXKMA43NTV21
NSF Program(s): STATISTICS, Cross-BIO Activities, MSPA-INTERDISCIPLINARY, Activation
Primary Program Source: 01001617DB NSF RESEARCH & RELATED ACTIVIT
01001819DB NSF RESEARCH & RELATED ACTIVIT
01001920DB NSF RESEARCH & RELATED ACTIVIT
01002021DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 1045, 1096, 8007, 8091, 9178, 9179
Program Element Code(s): 126900, 727500, 745400, 771300
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.074

ABSTRACT

This project aims to understand how large populations of neurons transform their encoded information to drive behaviors meaningful to the organism. This will be accomplished in two ways. First, the research will derive new analysis methods that experimentalists can use to interpret neural data from naturalistic tasks of moderate complexity. Second, the project will create a broadly applicable computational framework for synthesizing these analyses into a theory of probabilistic neural computation. Both components are informed by three basic principles: information in the brain is distributed across many neurons, sensory evidence is weighted by its reliability, and neural computation occurs in multiple stages. Current analyses that connect animal behavior to neural activity apply to tasks so simple that an animal would not actually need a brain to solve them: the same computations could be accomplished in a single step by wiring the sensory organs directly to the muscles. There is a clear need to study more complex tasks that require multi-step computations, and the proposed research will provide the rigorous statistical foundation needed to analyze data from such studies. The research will also have a broader educational impact by creating interactive teaching games that explain concepts needed for thinking about big neuroscience data.

The long-term goal of this research program is to explain brain function by constructing quantitative theories of how distributed nonlinear neural computation implements principles of statistical reasoning. To accomplish this goal, this project will create a normative theory for what information about naturalistic tasks should be encoded in neural populations, and data analyses that can reveal which aspects of that information are actually decoded. The normative theory is based on probabilistic population codes, a model in which large-scale neural activity patterns encode not just estimates of a stimulus, but also the reliability of those estimates. This model is currently applied only to small-scale inference problems, and one aim of this project is to extend it by constructing biologically plausible network models for complex naturalistic tasks involving many interacting variables. The key components of this model, and indeed of any model of naturalistic computation, are nonlinear operations. To determine whether the posited nonlinear computations occur in a real brain, the other aim of the project is to derive a statistical analysis technique centered on a novel generalization of standard choice-related activity, termed nonlinear choice correlation. By combining this measure with estimates of neural correlations, experimentalists will be able to infer the class of distributed nonlinear computations the brain uses from simultaneous recordings of neural activity and animal behavior.
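To make the probabilistic population code idea concrete, the following minimal sketch (an illustration added for this write-up, not the project's actual model or code) simulates a small population of independent Poisson neurons with Gaussian tuning. Scaling the population's overall gain changes how many spikes are available, and the decoded posterior over the stimulus sharpens accordingly, so the same activity format carries both an estimate and its reliability. All parameters (neuron count, tuning width, gains) are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

s_grid = np.linspace(-10, 10, 401)   # candidate stimulus values
prefs = np.linspace(-10, 10, 50)     # preferred stimuli of 50 model neurons
width = 3.0                          # tuning-curve width (illustrative)

def rates(s, gain):
    """Mean Poisson rates: Gaussian tuning curves scaled by an overall gain."""
    return gain * np.exp(-0.5 * ((s - prefs) / width) ** 2) + 0.1

def posterior(r, gain):
    """Posterior over s given spike counts r, assuming independent Poisson
    noise and a flat prior: log p(s|r) = sum_i [r_i log f_i(s) - f_i(s)] + const."""
    logp = np.array([np.sum(r * np.log(rates(s, gain)) - rates(s, gain))
                     for s in s_grid])
    p = np.exp(logp - logp.max())
    return p / p.sum()

true_s = 2.0
for gain in (2.0, 20.0):                   # low- vs. high-reliability conditions
    r = rng.poisson(rates(true_s, gain))   # one simulated population response
    p = posterior(r, gain)
    mean = np.sum(s_grid * p)
    sd = np.sqrt(np.sum((s_grid - mean) ** 2 * p))
    print(f"gain {gain:5.1f}: posterior mean {mean:+.2f}, sd {sd:.2f}")
```

In this toy model the higher-gain population yields a narrower posterior, which is the sense in which activity encodes not only an estimate but also the reliability of that estimate.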

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Alefantis, Panos and Lakshminarasimhan, Kaushik and Avila, Eric and Noel, Jean-Paul and Pitkow, Xaq and Angelaki, Dora E "Sensory evidence accumulation using optic flow in a naturalistic navigation task" Journal of Neuroscience , v.42 , 2022 , p.5451--546 10.1523/JNEUROSCI.2203-21.2022
Forseth, KJ and Pitkow, X and Fischer-Baum, S and Tandon, N "What The Brain Does As We Speak" bioRxiv , 2021 10.1101/2021.02.05.429841
Giahi Saravani A, Forseth K, Tandon N, Pitkow X "Dynamic Brain Interactions during Picture Naming" eNeuro , v.6 , 2019 , p.0472-18.2 10.1523/ENEURO.0472-18.2019
Kim, Juhyeon and Orhan, Emin and Yoon, Kijung and Pitkow, Xaq "Two-argument activation functions learn soft XOR operations like cortical neurons" IEEE Access , 2022 10.1109/ACCESS.2022.3178951
Kwon M, Daptardar S, Schrater P, Pitkow X "Inverse Rational Control with Partially Observable Continuous Nonlinear Dynamics" Advances in Neural Information Processing Systems , v.2020 , 2020
Sinz F, Pitkow X, Reimer J, Bethge M, Tolias A "Engineering a less artificial intelligence" Neuron , v.103 , 2019 , p.967 10.1016/j.neuron.2019.08.034
Stavropoulos, Akis and Lakshminarasimhan, Kaushik J and Laurens, Jean and Pitkow, Xaq and Angelaki, Dora E "Influence of sensory modality and control dynamics on human path integration" eLife , v.11 , 2022 , p.e63405 10.7554/eLife.63405
Walker EY, Sinz FH, Froudarakis E, Fahey PG, Muhammad T, Ecker AS, Cobos E, Reimer J, Pitkow X, Tolias AS "Inception in visual cortex: in vivo-silico loops reveal most exciting images." Nature Neuroscience , 2019
Wu Z, Kwon M, Daptardar S, Schrater P, Pitkow X "Rational thoughts in neural codes" PNAS , v.117 , 2020 , p.29311 10.1073/pnas.1912336117
Xaq Pitkow* and Dora E. Angelaki "Inference in the Brain: Statistics Flowing in Redundant Population Codes" Neuron , v.94 , 2017 , p.943

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

This project created theories of how the brain could represent the world and use that representation to guide behavior. It focused on two key principles of representation: uncertainty and nonlinearity. For the first topic, uncertainty, we began with the premise that animals receive lots of sensory data, but not all of it is equally reliable. For a familiar example, it’s usually harder to hear where a sound comes from than to see the thing that makes the sound. For the brain to adjust how much to trust different sensory inputs compared to its mental predictions, it must somehow represent the reliabilities of all of these factors. We developed and contrasted theories of how brain activity might encode these reliabilities, which we view through the lens of probability and statistics. This allowed us to address a major challenge, namely how those probabilities could account for causal connections between variables. For this we described brain implementations of a mathematical framework called probabilistic graphical models, and showed how this could avoid a scaling catastrophe that previous work claimed would plague some theories of brain computation.
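A standard textbook way to express this reliability weighting, included here only as a hedged illustration (the cues and numbers are invented for the example, and this is not the project's model), is precision-weighted cue combination: each cue is weighted by the inverse of its variance, so noisier signals count for less.

```python
import math

# Toy precision-weighted cue combination (illustrative numbers only):
# combine a visual and an auditory estimate of the same location,
# weighting each by its reliability (inverse variance).

visual, sigma_v = 10.0, 1.0     # visual estimate, low noise
auditory, sigma_a = 14.0, 3.0   # auditory estimate, higher noise

w_v = 1.0 / sigma_v**2
w_a = 1.0 / sigma_a**2
combined = (w_v * visual + w_a * auditory) / (w_v + w_a)
combined_sigma = math.sqrt(1.0 / (w_v + w_a))

print(f"combined estimate: {combined:.2f} +/- {combined_sigma:.2f}")
# The combined estimate (about 10.4) sits much closer to the reliable visual
# cue than to the noisy auditory cue, and is more precise than either alone.
```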

The second topic, nonlinearity, is a core ingredient in modern algorithms for machine intelligence. Nonlinearity refers to “folding” signals so that related information can be processed similarly. This is like folding and cutting a paper snowflake, which can make complicated shapes out of simple operations. In the brain and in machines, this approach can be used to find complicated data patterns using simple hardware. We developed methods to understand what nonlinearities the brain actually uses, by analyzing how neural data relates to both the brain’s sensory inputs and the behavioral outputs. We used this approach to analyze neural recordings from monkey brains, and showed that some animals actually use most of the information their brain has about some tasks.
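As a concrete, self-contained illustration of the “folding” idea (a toy example added for exposition, not the project's analysis code), consider XOR, the simplest relationship that no purely linear readout can compute: adding a single nonlinear feature solves it immediately.

```python
import itertools
import numpy as np

X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]])  # inputs coded as -1/+1
y = np.array([-1, 1, 1, -1])                         # XOR: +1 iff inputs differ

# Brute-force search over linear readouts sign(w1*x1 + w2*x2 + b):
# none of them classifies all four points correctly.
best_linear = 0.0
for w1, w2, b in itertools.product(np.linspace(-2, 2, 21), repeat=3):
    pred = np.sign(X @ np.array([w1, w2]) + b)
    pred[pred == 0] = 1
    best_linear = max(best_linear, float(np.mean(pred == y)))
print("best linear accuracy:", best_linear)          # 0.75, never 1.0

# One nonlinear "fold" of the inputs, the product x1*x2, makes XOR trivial:
feature = X[:, 0] * X[:, 1]
pred = np.sign(-feature)                             # -x1*x2 equals XOR here
print("accuracy with nonlinear feature:", float(np.mean(pred == y)))  # 1.0
```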

These two topics are also closely connected, because representing and using uncertainty generally requires nonlinear computation. Within this project, we connected these ideas through both abstract theories and concrete analyses of experiments. On the abstract side, we developed theories about when the brain should try to suppress predicted inputs; theories about how to bridge between two prominent, but quite distinct, machine learning algorithms to get the best of both; and theories about how to incorporate more of the basic nonlinearities of biological neurons into richer units within artificial neural networks. On the concrete side, we analyzed neural data to discover: how uncertainties affect monkey navigation in virtual reality; which nonlinear combinations of visual inputs best activate neurons in the visual cortex of mice; and how brain states shift in human brains while people speak.

We also used resources from this grant to develop and disseminate educational material. The largest impact has been my service on the Executive Committee that led the Neuromatch Academy in 2020. This was a massive online summer school in computational neuroscience that taught nearly 7000 students from over 100 countries, with live instruction in 13 languages and prerecorded content captioned in three languages. This intense effort was organized rapidly over three months to partially repair the educational disruption caused by the COVID-19 pandemic, as summer schools were canceled and those formative experiences were lost. I led multiple teams of people by chairing the Projects committee and through contributions to the Curriculum, Professional Development, Diversity, and Mentoring Committees. All of the course materials will remain free online for anyone. I continued this engagement in 2021 and 2022 to improve the course materials and their interactivity. Overall we established a new standard for delivering high-quality remote interactive content with a focus on inclusivity, making lemonade out of pandemic lemons.

Last Modified: 10/14/2022
Modified by: Xaq Pitkow
