Award Abstract # 1938400
SBIR Phase I: Neural Component Architecture to Accelerate Modeling & Simulation

NSF Org: TI (Translational Impacts)
Recipient: JULIA COMPUTING, INC.
Initial Amendment Date: January 21, 2020
Latest Amendment Date: January 21, 2020
Award Number: 1938400
Award Instrument: Standard Grant
Program Manager: Peter Atherton
  patherto@nsf.gov
  (703) 292-8772
  TI (Translational Impacts)
  TIP (Directorate for Technology, Innovation, and Partnerships)
Start Date: February 1, 2020
End Date: January 31, 2021 (Estimated)
Total Intended Award Amount: $224,676.00
Total Awarded Amount to Date: $224,676.00
Funds Obligated to Date: FY 2020 = $224,676.00
History of Investigator:
  • Keno Fischer (Principal Investigator)
    grants.keno@juliacomputing.com
Recipient Sponsored Research Office: Julia Computing Inc
20 GARLAND RD
NEWTON CENTER
MA  US  02459-1709
(617)201-7055
Sponsor Congressional District: 04
Primary Place of Performance: Julia Computing, Inc.
20 Garland Rd
Newton
MA  US  02459-1709
Primary Place of Performance Congressional District: 04
Unique Entity Identifier (UEI): CUXTK87LQV49
Parent UEI:
NSF Program(s): SBIR Phase I
Primary Program Source: 01002021DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 8032
Program Element Code(s): 537100
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.084

ABSTRACT

The broader impact of this Small Business Innovation Research (SBIR) Phase I project will result from enabling significant cost and time savings in developing new, more efficient designs in broad fields such as engineering and healthcare. If successful, the project will enable simulations of everything from automobiles to aerospace components and pharmaceuticals to run up to 100 times faster by representing a physical component of a system with an advanced digital analogue. To date, software incompatibilities have limited the development of this kind of modeling. This project will solve this problem through advanced computational and compiler techniques, and thereby demonstrate the feasibility of a new kind of design process with significant cost reductions.

This Small Business Innovation Research Phase I project will demonstrate the feasibility of using neural components in a modular simulation system. We will combine the successes of surrogate model optimization and neural ordinary differential equations (ODEs) to enable component-based differential-algebraic equation models with automated model order reduction through a latent differential equation. The idea is to build complex models as assemblies of modular, pre-designed simulation components, using our recent advances in differentiable programming and machine learning software to automate the training of neural model order reductions that accelerate the solution of large acausal models. Two machine learning methods show particular promise for accelerating traditional mechanistic modeling workflows: surrogate optimization and neural differential equations. In this project, we will integrate these components into a prototype system.
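To illustrate the latent neural ODE idea described above, the following Julia sketch pairs a full-order mechanistic component model with a small neural surrogate. This is not the project's code: the model, the hand-rolled NeuralRHS network, and the trajectory-matching loss are hypothetical, and it assumes only OrdinaryDiffEq.jl; a real workflow would train the surrogate parameters with gradient-based (adjoint) methods.

using OrdinaryDiffEq, LinearAlgebra

# Full-order "mechanistic" component model: a small linear system standing in
# for a component-level model after index reduction.
A = [-1.0 2.0; -2.0 -1.0]
fullorder!(du, u, p, t) = mul!(du, A, u)
u0 = [1.0, 0.0]
tspan = (0.0, 5.0)
reference = solve(ODEProblem(fullorder!, u0, tspan), Tsit5(), saveat = 0.1)

# Latent neural ODE surrogate: a hand-rolled two-layer network parameterizes
# the reduced dynamics (illustrative only).
struct NeuralRHS
    W1::Matrix{Float64}; b1::Vector{Float64}
    W2::Matrix{Float64}; b2::Vector{Float64}
end
(net::NeuralRHS)(u) = net.W2 * tanh.(net.W1 * u .+ net.b1) .+ net.b2
nn = NeuralRHS(0.1 .* randn(8, 2), zeros(8), 0.1 .* randn(2, 8), zeros(2))
surrogate!(du, u, p, t) = (du .= nn(u))
surrogate_sol = solve(ODEProblem(surrogate!, u0, tspan), Tsit5(), saveat = 0.1)

# Illustrative trajectory-matching loss; training would minimize this over the
# network parameters so the cheap surrogate reproduces the reference model.
loss = sum(abs2, Array(reference) .- Array(surrogate_sol))
println("untrained surrogate loss = ", loss)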

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

Modeling and simulation are key to modern engineering and bioengineering. Engineers build and simulate increasingly complex models, but today's modeling software falls short of the bioengineer's expectations, whether the task is synthesizing biocircuits or biofuel pathways, analyzing drug targets via quantitative systems pharmacology, or similar work. We are leveraging differentiable programming to enhance the productivity of bioengineers as they develop next-generation technologies.

By combining a new class of differential-equation-based machine learning algorithms with GPUs (the Julia SciML stack), we have advanced the capabilities of Modelica-like acausal modeling environments and achieved a greater than 100x performance improvement. This is made possible by investing substantial offline training time to build machine-learning surrogates of physical models. These surrogates can be shipped to scientists pre-trained, so the 100x performance improvement directly accelerates drug design, cardiac pathology identification, biofuel pathway analysis, and more.
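To give a concrete sense of the acausal, equation-based modeling style the SciML stack supports, here is a minimal sketch following the standard ModelingToolkit.jl Lorenz example; exact API details vary across package versions, and this is not the project's benchmark code.

using ModelingToolkit, OrdinaryDiffEq

# Declare symbolic parameters, the independent variable, and state variables.
@parameters σ ρ β
@variables t x(t) y(t) z(t)
D = Differential(t)

# Equations are written acausally, as relations rather than assignments.
eqs = [D(x) ~ σ * (y - x),
       D(y) ~ x * (ρ - z) - y,
       D(z) ~ x * y - β * z]

@named sys = ODESystem(eqs, t)
sys = structural_simplify(sys)   # symbolic simplification before numerics

u0 = [x => 1.0, y => 0.0, z => 0.0]
p  = [σ => 10.0, ρ => 28.0, β => 8 / 3]
prob = ODEProblem(sys, u0, (0.0, 100.0), p)
sol = solve(prob, Tsit5())

In this workflow the symbolic system, not hand-written derivative code, is the model artifact, which is what makes it a natural target for automatically trained surrogate replacements.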

In what we consider a highlight of this effort, engineers at NASA Launch Services achieved a 15,000x performance improvement over their existing tools for space payload simulation.

The MIT undergraduate course 18.S191 and graduate course 18.337 cover a large portion of the SciML tools built and enhanced as part of this project. These courses have had notably wide reach, with one lecture exceeding 250,000 views on YouTube. Many trainings and workshops have already been given on these tools at large conferences, with the three-hour “Doing Scientific Machine Learning” workshop at JuliaCon 2020 reaching over 20,000 views. Discussions of the software and methods behind this work, such as the releases of the Symbolics.jl and ModelingToolkit.jl symbolic modeling languages, have been featured at the top of tech news aggregation sites like Hacker News.


Last Modified: 03/25/2021
Modified by: Keno Fischer
