Award Abstract # 2328857
Collaborative Research: FuSe: Metaoptics-Enhanced Vertical Integration for Versatile In-Sensor Machine Vision

NSF Org: CCF
Division of Computing and Communication Foundations
Recipient: UNIVERSITY OF ILLINOIS
Initial Amendment Date: September 13, 2023
Latest Amendment Date: November 5, 2024
Award Number: 2328857
Award Instrument: Continuing Grant
Program Manager: Sankar Basu
sabasu@nsf.gov
(703) 292-7843
CCF - Division of Computing and Communication Foundations
CSE - Directorate for Computer and Information Science and Engineering
Start Date: October 1, 2023
End Date: September 30, 2026 (Estimated)
Total Intended Award Amount: $400,000.00
Total Awarded Amount to Date: $400,000.00
Funds Obligated to Date:
  FY 2023 = $133,000.00
  FY 2024 = $267,000.00
History of Investigator:
  • Viktor Gruev (Principal Investigator)
    vgruev@illinois.edu
Recipient Sponsored Research Office: University of Illinois at Urbana-Champaign
506 S WRIGHT ST
URBANA
IL  US  61801-3620
(217)333-2187
Sponsor Congressional District: 13
Primary Place of Performance: University of Illinois at Urbana-Champaign
506 S WRIGHT ST
URBANA
IL  US  61801-3620
Primary Place of Performance Congressional District: 13
Unique Entity Identifier (UEI): Y8CWNJRCNN91
Parent UEI: V2PHZ2CSCH63
NSF Program(s): FuSe-Future of Semiconductors
Primary Program Source:
  01002324DB NSF RESEARCH & RELATED ACTIVIT
  01002425DB NSF RESEARCH & RELATED ACTIVIT
  01002526DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 7945
Program Element Code(s): 216Y00
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Vision is perhaps the most important human perception, as the majority of the brain's cognitive function is dedicated to processing visual information. Despite recent advancements, today's vision sensors remain quite primitive compared to the superior ability of human visual perception. Moreover, the rapid development of deep learning and artificial intelligence (AI) has unleashed a new wave of machine vision, in which increasing amounts of image data are generated and consumed not by humans but by edge devices performing intelligent tasks such as classification, recognition, and perception. Inspired by the biological system and motivated by the enormous demand for machine vision, this project investigates an integrated and holistic approach to building versatile vision systems that can be tailored for domain-specific tasks. It aims to create a vertically integrated design stack for vision sensors spanning optics, image sensors, and vision processors. The project is expected to herald a new paradigm of AI-driven vision systems and demonstrate technology that addresses pivotal engineering challenges, from real-time visual adaptivity in self-driving cars to near-zero-energy operation in persistent environmental monitoring. In addition, the project's education and workforce development activities foster an open-source hardware community to boost accessibility and deepen collaboration beyond traditional disciplinary divides, as well as to build up domestic talent in the vision sensor industry, which is critical to national security and supply chain safety.

The research objective of this project is to create the scientific and engineering foundations for a novel machine vision system that explores the hybrid integration of nanophotonic metamaterials and complementary metal-oxide-semiconductor (CMOS) circuits and synergistically leverages the intrinsic computing capability of computational metasurfaces and analog-domain encoders embedded inside the image sensors. Our principled approach to abstracting design knobs and modeling interactions and tradeoffs across system layers and physical domains will inform future "More than Moore" multi-physics semiconductor device integration. We will delve into the key concept of optimally distributing computation along the processing pipeline with complementary intrinsic physical-domain operations. Our end-to-end design framework is deliberately created to bridge the divide between modeling/simulation infrastructure and design toolchains across multiple heterogeneous physical domains. The core principle of embedding machine-learning-enabled feature selection within optical/electrical vertical integration could have a major impact on the design of sensor-rich intelligent physical platforms where resource constraints coincide with strict latency requirements. The technology developed in this project will turbocharge AI-enabled hardware to satisfy the tremendous computational demand imposed by data proliferation, broadly benefiting a range of burgeoning industries such as machine vision as a service, smart IoT infrastructure, and data-driven sensing and imaging.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Please report errors in award information by writing to: awardsearch@nsf.gov.
