
NSF Org: CCF Division of Computing and Communication Foundations
Recipient: Boston University
Initial Amendment Date: June 22, 2017
Latest Amendment Date: June 22, 2017
Award Number: 1718342
Award Instrument: Standard Grant
Program Manager: A. Funda Ergun, CCF Division of Computing and Communication Foundations, CSE Directorate for Computer and Information Science and Engineering
Start Date: July 1, 2017
End Date: June 30, 2020 (Estimated)
Total Intended Award Amount: $497,504.00
Total Awarded Amount to Date: $497,504.00
Recipient Sponsored Research Office: 1 Silber Way, Boston, MA, US 02215-1703, (617) 353-4365
Primary Place of Performance: Boston, MA, US 02215-1300
NSF Program(s): Algorithmic Foundations
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
Efficient algorithms are at the heart of modern computing systems. In the classical study of algorithms, efficiency has long been equated with polynomial running time in the size of the input. However, the recent rise of massive datasets, for which even quadratic running times may be practically infeasible, has led to a rethinking of this assumption. This rethinking has produced a number of breakthroughs on fundamental problems, such as computing the maximum flow through a network, which can now be solved in essentially linear time in many cases. These advances are largely based on novel algorithmic tools stemming from convex and numerical optimization, which rely heavily on continuous mathematics. Building on this paradigm, the PIs aim to provide improved algorithms for two classes of problems that are important both in theory and in practice: maximum flow problems and submodular optimization problems. These problems are central to computer science and have wide applicability to many real-life problems. The project will support and train two Ph.D. students and a postdoc in algorithm design and optimization at Boston University. The proposed research fosters a fruitful exchange of ideas between theoretical computer science and continuous optimization, strengthening existing ties as well as forging new connections between the two areas. The continuous viewpoint provides a new perspective on algorithm design that the PIs plan to disseminate broadly and to incorporate into their optimization courses at Boston University.
In this project, the PIs will delve deeply into the connection between efficient discrete optimization and continuous mathematics by considering the following continuous approach to algorithm design: algorithms are initially conceived as continuous trajectories through the space of solutions to the problem, e.g., as described by a set of differential equations; these trajectories are then discretized to yield true discrete-time algorithms that are provably fast. The PIs plan to exploit the new understanding of algorithms obtained through this framework to provide improved algorithms for two classes of problems that are important both in theory and in practice: maximum flow problems and submodular optimization problems. The main technical focus of the project will be understanding and leveraging the idea of "acceleration," which plays a central role in convex optimization, as it yields optimal gradient-descent algorithms for a large class of functions. A very recent line of work initiated the study of accelerated methods from a continuous-time perspective using ordinary differential equations (ODEs) and classical discretization tools. More precisely, these works aim to represent a class of accelerated methods via ODEs whose dynamics describe the continuous-time limits of the discrete algorithms, while the discrete algorithms can be interpreted as appropriate discretizations of the continuous-time curves described by the ODEs.
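To make the continuous-to-discrete correspondence concrete, the sketch below (an illustration, not part of the award) uses the ODE x''(t) + (3/t) x'(t) + grad f(x(t)) = 0, identified by Su, Boyd, and Candes as the continuous-time limit of Nesterov's accelerated gradient method, and compares a forward-Euler simulation of its trajectory against the discrete method on an arbitrary quadratic; the problem instance, step sizes, and horizons are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Smooth convex test problem: f(x) = 0.5 * x^T A x with A positive definite.
M = rng.standard_normal((20, 10))
A = M.T @ M / 20.0
f = lambda x: 0.5 * x @ (A @ x)
grad = lambda x: A @ x
L = np.linalg.eigvalsh(A).max()          # smoothness constant

x0 = rng.standard_normal(10)

# Continuous view: forward-Euler simulation of the ODE
#   x''(t) + (3/t) x'(t) + grad f(x(t)) = 0,
# whose trajectory is the continuous-time limit of Nesterov's method.
def ode_trajectory(x0, dt=1e-3, T=20.0):
    x, v, t = x0.copy(), np.zeros_like(x0), dt
    while t < T:
        x, v = x + dt * v, v + dt * (-(3.0 / t) * v - grad(x))
        t += dt
    return x

# Discrete view: Nesterov's accelerated gradient method, readable as a
# careful discretization of the same continuous trajectory.
def nesterov(x0, steps=200):
    x, y = x0.copy(), x0.copy()
    for k in range(1, steps + 1):
        x_next = y - grad(y) / L
        y = x_next + (k - 1.0) / (k + 2.0) * (x_next - x)
        x = x_next
    return x

print("f after ODE simulation :", f(ode_trajectory(x0)))
print("f after Nesterov steps :", f(nesterov(x0)))
```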
The project will build on this recently established viewpoint to make progress on central combinatorial optimization problems involving graphs and submodular functions. In particular, the project aims to exploit the rich combinatorial structure present in these problems to construct improved discretization procedures for known dynamical systems corresponding to accelerated algorithms. Specific problems of interest include: (a) maximum s-t flow and connectivity problems in graphs and networks, where the emphasis will be on leveraging convex optimization techniques such as acceleration to obtain faster approximate solutions in undirected and directed graphs; and (b) constrained submodular maximization problems, with a particular focus on designing new continuous algorithms and discretizations for solving known continuous relaxations of submodular objectives under structured constraints, such as packing and covering constraints. A sketch of one such continuous relaxation appears below.
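As background for (b), a standard continuous relaxation of a submodular objective is its multilinear extension F(x) = E[f(R(x))], where R(x) includes each element i independently with probability x_i, and the classical continuous greedy method (Calinescu, Chekuri, Pal, and Vondrak) repeatedly moves a fractional point toward the best feasible vertex. The sketch below is illustrative, not the award's algorithms: it estimates grad F by sampling on a toy coverage function under a cardinality constraint, and all instance parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy monotone submodular objective: a coverage function where
# element i covers a small random subset of a ground universe.
n, k = 10, 3
covers = [set(rng.choice(40, size=6, replace=False)) for _ in range(n)]
f = lambda S: len(set().union(*(covers[i] for i in S)))

# Multilinear extension F(x) = E[f(R)], where R includes element i
# independently with probability x_i; the partial derivative dF/dx_i
# equals E[f(R + i) - f(R - i)], estimated here by sampling.
def grad_F(x, samples=60):
    g = np.zeros(n)
    for _ in range(samples):
        R = {i for i in range(n) if rng.random() < x[i]}
        for i in range(n):
            g[i] += f(R | {i}) - f(R - {i})
    return g / samples

# Continuous greedy over {x in [0,1]^n : sum(x) <= k}: in each of T
# rounds, move 1/T of the way toward the vertex maximizing <v, grad F>.
def continuous_greedy(T=30):
    x = np.zeros(n)
    for _ in range(T):
        v = np.zeros(n)
        v[np.argsort(-grad_F(x))[:k]] = 1.0
        x += v / T
    return x

x = continuous_greedy()
S = set(np.argsort(-x)[:k])          # naive rounding, for illustration
print("fractional x:", np.round(x, 2))
print("rounded set:", sorted(S), "value:", f(S))
```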
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
The project led to the development of novel algorithmic frameworks for discrete and continuous optimization. On the continuous optimization front, the novel techniques and associated algorithms developed as part of this award have significantly advanced the understanding of accelerated methods, an important class of first-order methods that provide optimal convergence guarantees for smooth convex optimization problems, i.e., problems involving the minimization of smooth (continuously differentiable) convex functions. Many practically important problems, such as regression problems, belong to this class. Moreover, techniques inspired by accelerated methods are of great importance beyond the convex case, as they are often applied to speed up the solution of non-convex problems in machine learning applications, such as deep learning. Specifically, the novel approximate-duality-gap framework gives a simple, unified explanation of accelerated methods and standard first-order methods. It shows that all such methods are based on iteratively constructing a primal-dual pair of solutions, i.e., a primal solution to the original problem and a dual solution certifying a lower bound on the optimal value. All methods can be methodically derived by specifying the form of such a primal-dual pair, which is an immediate consequence of the problem assumptions, and by driving the gap between the values of the primal and dual solutions to zero. The benefits of these new insights are already evident from related results funded by this award, including novel algorithms for acceleration in the presence of noise and improved methods for resource allocation problems. Beyond the time frame of this award, the approximate-duality-gap framework provides a systematic foundation to connect algorithm design within theoretical computer science to the study of natural continuous-time dynamical systems and to the calculus of variations.
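As a toy illustration of the primal-dual idea described above (a sketch in the spirit of the framework, not the framework itself), the code below runs plain gradient descent on a least-squares objective and maintains an explicit certificate: the primal upper bound is the current function value, and the dual lower bound comes from averaging the gradient linearizations, which by convexity lower-bound f everywhere, and minimizing that average over a ball assumed to contain the minimizer (the radius R is an assumption of the sketch).

```python
import numpy as np

rng = np.random.default_rng(2)

# Smooth convex objective f(x) = 0.5 * ||Bx - b||^2.
B = rng.standard_normal((30, 8))
b = rng.standard_normal(30)
f = lambda x: 0.5 * np.sum((B @ x - b) ** 2)
grad = lambda x: B.T @ (B @ x - b)
step = 1.0 / np.linalg.eigvalsh(B.T @ B).max()   # 1 / smoothness constant

R = 10.0     # assumed radius of a ball containing the minimizer
x = np.zeros(8)
lin_const, g_sum = 0.0, np.zeros(8)

for k in range(1, 201):
    g = grad(x)
    # Dual side: each linearization f(x_k) + <g_k, y - x_k> lower-bounds
    # f(y) by convexity, so the average of the linearizations, minimized
    # over the ball ||y|| <= R, is a valid lower bound on the optimum.
    lin_const += f(x) - g @ x
    g_sum += g
    lower = lin_const / k - R * np.linalg.norm(g_sum) / k
    upper = f(x)                     # primal side: current function value
    if k % 50 == 0:
        print(f"step {k:3d}   duality gap <= {upper - lower:.4f}")
    x = x - step * g                 # plain gradient descent step
```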
On the discrete optimization front, the project led to the development of very efficient sequential and parallel algorithms for maximizing submodular functions, a broad class of discrete functions that captures the key diminishing-returns property of utility functions in economics, joint entropy functions in information theory, and clustering and other objective functions in information retrieval and machine learning. The parallel algorithms developed achieve exponential speedups in parallel running time over existing methods. The algorithms smoothly interpolate between linear and more general submodular objectives; in practice, where the objective does not exhibit worst-case submodular behavior, this interpolation leads to further improved performance. The sequential algorithms make progress toward achieving very fast and practical running times that scale nearly linearly with the size of the input data. The project also tackled the problem of finding a maximum flow in a graph, a fundamental graph optimization problem that is a building block in many applications, and led to the development of a faster algorithm for finding a minimum cost flow in a graph. The aforementioned results combine combinatorial structure with techniques from the field of continuous optimization, such as continuous relaxations, gradient descent primitives, and differential equations. The connections established between discrete and continuous optimization have the potential to lead to further progress in theoretical computer science, machine learning, and beyond.
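For context (a textbook baseline, not the algorithms developed under this award), the classic greedy method of Nemhauser, Wolsey, and Fisher maximizes a monotone submodular function under a cardinality constraint with a (1 - 1/e) guarantee, but its k rounds are inherently sequential; the parallel algorithms referenced above reduce the number of adaptive rounds exponentially. A minimal sketch on a toy coverage function:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy monotone submodular objective: coverage of a ground universe.
n = 15
covers = [set(rng.choice(60, size=8, replace=False)) for _ in range(n)]
f = lambda S: len(set().union(*(covers[i] for i in S)))

# Classic sequential greedy for max f(S) s.t. |S| <= k: repeatedly add
# the element with the largest marginal gain. For monotone submodular f
# this achieves a (1 - 1/e)-approximation, but it needs k adaptive
# rounds; the parallel methods above use exponentially fewer rounds.
def greedy(k):
    S = []
    for _ in range(k):
        gain, best = max((f(S + [i]) - f(S), i) for i in range(n) if i not in S)
        if gain <= 0:
            break
        S.append(best)
    return S

S = greedy(k=5)
print("greedy set:", S, "value:", f(S))
```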
Last Modified: 12/29/2020
Modified by: Alina Ene