
NSF Org: Office of Strategic Initiatives (OSI)
Recipient:
Initial Amendment Date: August 5, 2019
Latest Amendment Date: August 5, 2019
Award Number: 1936314
Award Instrument: Standard Grant
Program Manager: Tingyu Li, tli@nsf.gov, (703) 292-4949, Office of Strategic Initiatives (OSI), Directorate for Mathematical and Physical Sciences (MPS)
Start Date: September 1, 2019
End Date: August 31, 2024 (Estimated)
Total Intended Award Amount: $2,000,000.00
Total Awarded Amount to Date: $2,000,000.00
Funds Obligated to Date:
History of Investigator:
Recipient Sponsored Research Office: 3112 LEE BUILDING, COLLEGE PARK, MD, US 20742-5100, (301) 405-6269
Sponsor Congressional District:
Primary Place of Performance: MD US 20742-3511
Primary Place of Performance Congressional District:
Unique Entity Identifier (UEI):
Parent UEI:
NSF Program(s): QISET-Quan Info Sci Eng & Tech, OFFICE OF MULTIDISCIPLINARY AC, EPMD-ElectrnPhoton&MagnDevices
Primary Program Source:
Program Reference Code(s):
Program Element Code(s):
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.049
ABSTRACT
Deep learning is revolutionizing computing for an ever-increasing range of applications, from natural language processing to particle physics to cancer diagnosis. These advances have been made possible by a combination of algorithmic design and dedicated hardware development. Quantum computing, while more nascent, is experiencing a similar trajectory, with a rapidly closing gap between current hardware and the scale required for practical implementation of quantum algorithms. However, we are still far from a full-scale quantum computer that can implement gate-based architectures, which require quantum error correction to make the system robust against noise, a capability that remains outside the reach of existing quantum technology. This project aims to develop a new approach to quantum computation by adopting concepts from the field of machine learning. In contrast to conventional approaches, where computation is decomposed into logical gates, the investigators will focus on quantum computing architectures inspired by machine learning and deep learning to implement quantum protocols that are naturally efficient and robust to noise. These architectures are ideally suited to maximizing the computational capabilities of currently available noisy quantum processors because machine learning algorithms can be trained using efficient methods such as back-propagation. The project represents a highly multidisciplinary effort that combines quantum hardware development with algorithm and computer architecture design to create quantum protocols and devices that can be leveraged for near-term applications in quantum simulation, machine learning, optimization, and quantum communication. Success of the project could open a completely new approach to quantum computing, enabling currently available quantum hardware to efficiently solve problems in a broad range of fields such as medicine, biology, nuclear physics, and fundamental quantum science. The program also entails a strong outreach effort that integrates education at the high school, undergraduate, and graduate levels with public education through a series of YouTube educational modules.
Integrated quantum photonics enables dynamic, high-fidelity generation and manipulation of quantum states of light, and is therefore a natural platform with which to develop chip-based quantum machine learning architectures. Leveraging both the versatility of neural networks and the computational complexity of quantum optics, the program develops chip-based deep quantum optical neural networks for applications in quantum computation, simulation, communication, machine learning, and beyond. Taking inspiration from the burgeoning field of neural networks, this hardware platform combines semiconductor quantum light sources (input encoding) with dynamically reconfigurable linear optical circuitry (matrix multiplication) and strong single-photon nonlinearities (the quantum neuron) to develop a new paradigm for next-generation quantum processors. In parallel, the theory effort will develop a robust numerical platform to simulate quantum machine learning protocols based on the hardware platform and design new protocols for multiple applications, including image and pattern recognition, optimization, and quantum communication. The strong collaborative interaction between hardware and theory will thus be leveraged to develop an entirely new arsenal of protocols that exploit the unique physical properties of photons.
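To make the "matrix multiplication" stage of this layered architecture concrete, the following is a minimal numerical sketch written for this summary, not drawn from the project's codebase: a single photon distributed over N waveguide modes passes through a mesh of Mach-Zehnder interferometers (MZIs) whose phases program an N x N unitary. The function names, rectangular mesh layout, and parameter choices are illustrative assumptions.

import numpy as np

def mzi(theta, phi):
    """2x2 unitary of one Mach-Zehnder interferometer: 50:50 beamsplitter,
    internal phase theta, second beamsplitter, external phase phi."""
    bs = np.array([[1, 1j], [1j, 1]], dtype=complex) / np.sqrt(2)
    return np.diag([np.exp(1j * phi), 1]) @ bs @ np.diag([np.exp(1j * theta), 1]) @ bs

def mesh_unitary(n_modes, phases):
    """Compose a rectangular (Clements-style) mesh of MZIs acting on
    adjacent mode pairs into one n_modes x n_modes unitary."""
    U = np.eye(n_modes, dtype=complex)
    k = 0
    for layer in range(n_modes):
        for m in range(layer % 2, n_modes - 1, 2):  # alternate mode pairings
            block = np.eye(n_modes, dtype=complex)
            block[m:m + 2, m:m + 2] = mzi(*phases[k])
            U = block @ U
            k += 1
    return U

n_modes = 4
rng = np.random.default_rng(0)
n_mzis = sum(len(range(l % 2, n_modes - 1, 2)) for l in range(n_modes))
phases = rng.uniform(0, 2 * np.pi, size=(n_mzis, 2))

# A single photon injected into mode 0: its state is the amplitude vector
# over modes, so the linear-optical layer is literally a matrix-vector product.
psi_in = np.zeros(n_modes, dtype=complex)
psi_in[0] = 1.0
psi_out = mesh_unitary(n_modes, phases) @ psi_in
print("output probabilities:", np.round(np.abs(psi_out) ** 2, 3))

For a single photon, the linear-optical circuit acts exactly as a matrix-vector product on mode amplitudes, which is the analogy the abstract draws; the quantum-neuron nonlinearity only manifests with multiple photons, whose simulation requires the full multi-photon state space and lies beyond this sketch.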
This project is jointly funded by the Quantum Leap Big Idea Program and the Division of Electrical, Communications, and Cyber Systems in the Directorate for Engineering.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
Deep learning is revolutionizing computing for an ever-increasing range of applications, from natural language processing to particle physics to cancer diagnosis. These advances are enabled by both algorithmic design and dedicated hardware development. Quantum computing, though more nascent, is experiencing a similar trajectory, with the gap between current hardware and the scale required for practical quantum algorithms rapidly closing. However, we remain far from a full-scale quantum computer that can implement gate-based architectures, which require quantum error correction to achieve robustness against noise.
This program explored a new approach to quantum computation by integrating concepts from machine learning and quantum photonics. Rather than decomposing computation into logical gates, we focused on quantum computing architectures inspired by machine learning and deep learning, implementing naturally efficient and noise-robust quantum protocols. These architectures are ideally suited to maximize the computational capabilities of currently available noisy quantum processors because machine learning algorithms can be trained using efficient methods such as back-propagation.
By leveraging both the versatility of neural networks and the computational complexity of quantum optics, the program investigated chip-based deep quantum optical neural networks for applications in quantum computation, simulation, communication, machine learning, and beyond. Inspired by the burgeoning field of neural networks, this hardware platform combines semiconductor quantum light sources (for input encoding) with dynamically reconfigurable linear optical circuitry (for matrix multiplication) and strong single-photon nonlinearities (the "quantum neuron"), constituting a new paradigm for next-generation quantum processors.
Over the course of this program, we made substantial theoretical and experimental progress toward realizing quantum optical neural networks. Theoretically, we proposed a deep quantum optical neural network design based on robust, cascadeable nonlinearities to enable complex computational tasks. In particular, by exploiting a three-level atomic system, we showed how photons can interact while preserving their temporal waveforms, ensuring that each nonlinear element can drive subsequent layers. We also developed novel training methods leveraging these nonlinearities, demonstrating key functionalities such as state mapping, state preparation, and fully error-corrected quantum logic.
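As a rough illustration of the kind of gradient-based training alluded to here, the following self-contained sketch (reusing the MZI-mesh model from the example above) tunes the mesh phases so that a photon entering mode 0 is mapped to a chosen target output mode, a toy version of state mapping. The infidelity loss, learning rate, and finite-difference gradients are stand-in assumptions; the project's actual training methods involve the nonlinear quantum neuron and are not reproduced here.

import numpy as np

N = 4  # number of waveguide modes

def mzi(theta, phi):
    """2x2 Mach-Zehnder unitary with internal phase theta, external phase phi."""
    bs = np.array([[1, 1j], [1j, 1]], dtype=complex) / np.sqrt(2)
    return np.diag([np.exp(1j * phi), 1]) @ bs @ np.diag([np.exp(1j * theta), 1]) @ bs

def unitary(params):
    """Rectangular mesh of MZIs on adjacent mode pairs; params holds the phases."""
    U = np.eye(N, dtype=complex)
    k = 0
    for layer in range(N):
        for m in range(layer % 2, N - 1, 2):
            block = np.eye(N, dtype=complex)
            block[m:m + 2, m:m + 2] = mzi(params[k], params[k + 1])
            U = block @ U
            k += 2
    return U

def loss(params, psi_in, target):
    """Infidelity between the mesh output and the target single-photon state."""
    return 1.0 - np.abs(np.vdot(target, unitary(params) @ psi_in)) ** 2

rng = np.random.default_rng(1)
n_params = 2 * sum(len(range(l % 2, N - 1, 2)) for l in range(N))
params = rng.uniform(0, 2 * np.pi, n_params)

psi_in = np.zeros(N, dtype=complex); psi_in[0] = 1.0      # photon enters mode 0
target = np.zeros(N, dtype=complex); target[N - 1] = 1.0  # route it to mode N-1

lr, eps = 0.3, 1e-6
for step in range(300):  # plain gradient descent via central finite differences
    grad = np.zeros_like(params)
    for i in range(n_params):
        shift = np.zeros_like(params); shift[i] = eps
        grad[i] = (loss(params + shift, psi_in, target)
                   - loss(params - shift, psi_in, target)) / (2 * eps)
    params -= lr * grad
print("final infidelity:", loss(params, psi_in, target))

The design point this toy example captures is the one the report emphasizes: because the hardware realizes a differentiable parameterized map, its phases can be trained with efficient gradient methods rather than compiled into a fixed gate sequence.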
Experimentally, we achieved major milestones in integrating large numbers of single quantum dot devices on a reconfigurable silicon photonic chip capable of implementing arbitrary linear optical unitaries. We demonstrated on-chip integration of quantum dots and their electrostatic tuning. These results set the stage for prototype devices containing multiple quantum dots on a single silicon photonic circuit, moving us closer to a fully functional quantum optical neural network.
Last Modified: 01/15/2025
Modified by: Edo Waks