Award Abstract # 2045900
CAREER: Automatic Variational Inference

NSF Org: IIS
Division of Information & Intelligent Systems
Recipient: UNIVERSITY OF MASSACHUSETTS
Initial Amendment Date: March 9, 2021
Latest Amendment Date: August 28, 2024
Award Number: 2045900
Award Instrument: Continuing Grant
Program Manager: Vladimir Pavlovic
vpavlovi@nsf.gov
(703) 292-8318
IIS: Division of Information & Intelligent Systems
CSE: Directorate for Computer and Information Science and Engineering
Start Date: September 1, 2021
End Date: August 31, 2026 (Estimated)
Total Intended Award Amount: $550,800.00
Total Awarded Amount to Date: $432,051.00
Funds Obligated to Date: FY 2021 = $102,904.00
FY 2022 = $104,102.00
FY 2023 = $110,238.00
FY 2024 = $114,807.00
History of Investigator:
  • Justin Domke (Principal Investigator)
    jdomke@umass.edu
Recipient Sponsored Research Office: University of Massachusetts Amherst
101 COMMONWEALTH AVE
AMHERST
MA  US  01003-9252
(413)545-0698
Sponsor Congressional District: 02
Primary Place of Performance: University of Massachusetts Amherst
Computer Science Building, Room
Amherst
MA  US  01003-9264
Primary Place of Performance Congressional District: 02
Unique Entity Identifier (UEI): VGJHK59NMPK9
Parent UEI: VGJHK59NMPK9
NSF Program(s): Robust Intelligence
Primary Program Source: 01002122DB NSF RESEARCH & RELATED ACTIVIT
01002223DB NSF RESEARCH & RELATED ACTIVIT
01002324DB NSF RESEARCH & RELATED ACTIVIT
01002425DB NSF RESEARCH & RELATED ACTIVIT
01002526DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 1045, 7495
Program Element Code(s): 749500
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Hundreds of thousands of people across science, government, and business use automatic probabilistic inference tools, in which users carefully state their assumptions and then combine them with observed data. However, these automatic tools only work at a relatively modest scale, while ever-growing datasets demand more powerful methods. Recent years have seen the development of a novel inference strategy able to handle datasets orders of magnitude larger, but great care and expertise are needed to wield these methods successfully, putting them out of reach for most potential users. This project seeks to promote the progress of science by making these large-scale techniques more automatic, putting them within reach of the vast majority of users who cannot invest huge amounts of effort in manual algorithmic engineering.

This project advances methodology for automatic and general-purpose variational inference, with the goal of answering two questions. The first question is: when does variational inference work? This is paramount, since no method can succeed on all problems. We take three directions: new diagnostic error measures, improved scalability for diagnostics, and an empirical evaluation on a corpus of real non-expert models gathered from an integrated course. The second question is: how can algorithmic design choices be automated? Variational inference algorithms require many delicate design choices that are currently made manually. The core idea for automating these decisions is to maintain statistics so that the effect of any set of choices on optimization speed can be predicted. This project will contribute 1) a corpus and evaluation of automatic inference on non-expert models, 2) improved diagnostic performance measures, and 3) methods to automatically make variational inference choices, guided by convergence rates.
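To give a concrete sense of the kind of method at issue, the following is a minimal, illustrative sketch (not the project's actual algorithms) of black-box variational inference with the reparameterization trick: a Gaussian approximation q(z) = N(mu, sigma^2) is fit to a toy one-dimensional posterior, here assumed to be N(3, 2^2), by stochastic gradient ascent on the evidence lower bound (ELBO). The target, step size, and batch size are all hypothetical choices for illustration.

```python
import numpy as np

# Toy target posterior p(z) = N(3, 2^2); only its score function is needed.
def grad_log_p(z):
    # d/dz log p(z) = -(z - 3) / sigma_p^2 with sigma_p = 2.
    return -(z - 3.0) / 4.0

rng = np.random.default_rng(0)
mu, log_sigma = 0.0, 0.0          # variational parameters of q = N(mu, sigma^2)
lr, steps, batch = 0.05, 2000, 64  # hypothetical tuning choices

for _ in range(steps):
    eps = rng.standard_normal(batch)
    sigma = np.exp(log_sigma)
    z = mu + sigma * eps           # reparameterized samples z ~ q
    dlp = grad_log_p(z)
    # Stochastic ELBO gradients: the E_q[log p(z)] term via the chain rule,
    # plus the entropy of q, whose gradient w.r.t. log_sigma is exactly 1.
    g_mu = dlp.mean()
    g_ls = (dlp * sigma * eps).mean() + 1.0
    mu += lr * g_mu
    log_sigma += lr * g_ls

print(mu, np.exp(log_sigma))  # approaches the target's mean 3 and scale 2
```

The "delicate design choices" the abstract refers to are visible even here: the step size, batch size, parameterization of sigma, and number of steps all materially affect whether the optimization converges, which is what motivates automating them.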

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Burroni, Javier; Domke, Justin; Sheldon, Daniel. "Sample Average Approximation for Black-Box Variational Inference," 2024.
Wang, Xi; Geffner, Tomas; Domke, Justin. "Joint control variate for faster black-box variational inference," 2024.
Yao, Yuling; Domke, Justin. "Discriminative Calibration: Check Bayesian Computation from Simulations and Flexible Classifier," 2023.
