Award Abstract # 2223725
EFRI BRAID: Using Proto-Object Based Saliency Inspired By Cortical Local Circuits to Limit the Hypothesis Space for Deep Learning Models

NSF Org: EFMA
Office of Emerging Frontiers in Research and Innovation (EFRI)
Recipient: THE JOHNS HOPKINS UNIVERSITY
Initial Amendment Date: September 16, 2022
Latest Amendment Date: September 16, 2022
Award Number: 2223725
Award Instrument: Standard Grant
Program Manager: Jordan Berg
jberg@nsf.gov
 (703)292-5365
EFMA
 Office of Emerging Frontiers in Research and Innovation (EFRI)
ENG
 Directorate for Engineering
Start Date: September 1, 2022
End Date: August 31, 2026 (Estimated)
Total Intended Award Amount: $1,999,112.00
Total Awarded Amount to Date: $1,999,112.00
Funds Obligated to Date: FY 2022 = $1,999,112.00
History of Investigator:
  • Ralph Etienne-Cummings (Principal Investigator)
    retienne@jhu.edu
  • Andreas Andreou (Co-Principal Investigator)
  • Ernst Niebur (Co-Principal Investigator)
  • Stefan Mihalas (Co-Principal Investigator)
Recipient Sponsored Research Office: Johns Hopkins University
3400 N CHARLES ST
BALTIMORE
MD  US  21218-2608
(443)997-1898
Sponsor Congressional District: 07
Primary Place of Performance: Johns Hopkins University
3400 N Charles St
Baltimore
MD  US  21218-2625
Primary Place of Performance Congressional District: 07
Unique Entity Identifier (UEI): FTMTDMBR29C7
Parent UEI: GS4PNKTRNKL3
NSF Program(s): EFRI Research Projects
Primary Program Source: 01002223DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 8091, 9102
Program Element Code(s): 763300
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.041

ABSTRACT

This Emerging Frontiers in Research and Innovation (EFRI) project will close the gap between natural intelligence (NI) and artificial intelligence (AI) by using computational models of the brain to help AI systems make more efficient use of both data and power. Specifically, the project takes inspiration from the ability of mammalian brains to store and process only an appropriately chosen subset of the information conveyed by the visual system. Without this feature, called selective attention or "saliency," the brain would soon be overwhelmed by the sheer volume of incoming sensory data. This project will translate neuroscience models of visual attention into new algorithms for learning in deep neural networks. These new algorithms will greatly reduce the number of variables that must be updated while learning new patterns. The benefits of these brain-inspired algorithms will be amplified by implementation on customized computing hardware designed to mimic the form and function of structures from the mammalian brain. The result will enable new AI devices with transformative capabilities and performance for applications from self-driving cars to medical diagnosis. As revolutionary as existing AI systems are, they fall well short of living organisms in the natural world: a young animal, for example, learns from its parent how to survive, which requires recognizing predators and learning effective evasive actions. Extrapolation of current AI hardware and software predicts that reaching these levels of performance would require prohibitive amounts of energy and training data. Projects such as this one will lead to the next generation of AI, overcoming these anticipated obstacles through new, neuro-inspired learning strategies.
This project will support the AI workforce of the future by educating a diverse cadre of AI trainees, from K-12 to Postdocs, and it will make innovative algorithms, hardware and datasets available to the AI research and development community.

Deep learning has achieved impressive performance in multiple tasks, driven by the capacity of backpropagation to "assign credit" to a vast array of parameters. Typical networks have immensely complex computational graphs, with many options to assign credit for every computation. This large number of options brings the benefit of great flexibility in learning, but also the costs of high energy consumption and the need for very large training datasets. A preselection of important (salient) features introduces inductive biases into learning, but such biases, when appropriately conditioned, can be optimally selected; in biological information processing this occurs via evolution or development. For this project, these biases can be inspired by biology or learned, and can be instantiated in software and hardware. The goal of this project is the creation of a hybrid architecture in which local circuits implement an attentional mechanism that provides a "gate," or modulation, for selecting features for a global learning network with a convolutional architecture. The attentional mechanism dramatically decreases the number of features considered for inference and for learning by incorporating a learned prior of which features are important. The starting point for the research will be existing attentional models that fit biological data, expanded by allowing a metasearch over attentional mechanisms. The expectation is that after determining and implementing optimal attentional mechanisms for a set of tasks/input statistics, power requirements for both inference and learning will be substantially reduced, and learning will be enabled with considerably fewer examples than traditional methods require. This project will also provide substantial opportunities to advance the training of highly qualified artificial intelligence workers, drawn from a pool of multi-disciplinary trainees at all levels from K-12 to Postdoctoral Fellowships.
Furthermore, the results will be made available in the form of databases and published system designs.
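The gating idea described above can be illustrated with a minimal sketch: an attentional mechanism scores the feature maps of a convolutional layer and zeroes out all but the most salient ones, so that downstream inference and credit assignment touch far fewer parameters. The scoring rule here (mean activation per channel) and all function names are illustrative assumptions for exposition, not the project's actual proto-object saliency model.

```python
import numpy as np

rng = np.random.default_rng(0)

def saliency_gate(feature_maps, k):
    """Keep only the k most salient feature maps; zero the rest.

    Toy stand-in for an attentional "gate": channels are scored by
    mean activation (an illustrative assumption), and only the top-k
    channels pass through to the rest of the network.
    """
    scores = feature_maps.mean(axis=(1, 2))        # one saliency score per channel
    keep = np.argsort(scores)[-k:]                 # indices of the k highest-scoring channels
    mask = np.zeros(feature_maps.shape[0], dtype=bool)
    mask[keep] = True
    # Broadcasting the boolean mask zeroes every non-selected channel.
    return feature_maps * mask[:, None, None], mask

# 16 channels of 8x8 activations; the gate retains the 4 most active.
maps = rng.random((16, 8, 8))
gated, mask = saliency_gate(maps, k=4)
print(int(mask.sum()))  # → 4
```

Because only the gated channels carry nonzero activations, a learning rule that updates weights in proportion to activity would, in this sketch, leave the parameters of the suppressed channels untouched.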

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Aitken, Kyle and Campagnola, Luke and Garrett, Marina E and Olsen, Shawn R and Mihalas, Stefan "Simple synaptic modulations implement diverse novelty computations" Cell Reports , v.43 , 2024 https://doi.org/10.1016/j.celrep.2024.114188
Akwaboah, Akwasi and Etienne-Cummings, Ralph "A Current-Mode Implementation of A Nearest Neighbor STDP Synapse" 21st IEEE Interregional NEWCAS Conference (NEWCAS) , 2023
Uejima, Takeshi and Mancinelli, Elena "The influence of stereopsis on visual saliency in a proto-object based model of selective attention" Vision Research , v.212 , 2023
Voina, D. "A biologically inspired architecture with switching units can learn to generalize across backgrounds" Neural Networks , v.168 , 2023
Wang, Caixin and Zhang, Jie and Wilson, Matthew A and Etienne-Cummings, Ralph "Pix2HDR - A pixel-wise acquisition and deep learning-based synthesis approach for high-speed HDR videos" IEEE Transactions on Pattern Analysis and Machine Intelligence , 2024 https://doi.org/10.1109/TPAMI.2024.3410140


