Award Abstract # 1718342
AF:Small: Continuous Perspectives on Accelerated Methods for Combinatorial Optimization

NSF Org: CCF
Division of Computing and Communication Foundations
Recipient: TRUSTEES OF BOSTON UNIVERSITY
Initial Amendment Date: June 22, 2017
Latest Amendment Date: June 22, 2017
Award Number: 1718342
Award Instrument: Standard Grant
Program Manager: A. Funda Ergun
CCF
 Division of Computing and Communication Foundations
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: July 1, 2017
End Date: June 30, 2020 (Estimated)
Total Intended Award Amount: $497,504.00
Total Awarded Amount to Date: $497,504.00
Funds Obligated to Date: FY 2017 = $497,504.00
History of Investigator:
  • Lorenzo Orecchia (Principal Investigator)
    orecchia@uchicago.edu
  • Alina Ene (Co-Principal Investigator)
Recipient Sponsored Research Office: Trustees of Boston University
1 SILBER WAY
BOSTON
MA  US  02215-1703
(617)353-4365
Sponsor Congressional District: 07
Primary Place of Performance: Trustees of Boston University
MA  US  02215-1300
Primary Place of Performance Congressional District: 07
Unique Entity Identifier (UEI): THL6A6JLE1S7
Parent UEI:
NSF Program(s): Algorithmic Foundations
Primary Program Source: 01001718DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 7923, 7926
Program Element Code(s): 779600
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT


Efficient algorithms are at the heart of any modern computing system. In the classical study of algorithms, the notion of efficiency has long been taken to mean polynomial running time in the size of the input. However, the recent rise of massive datasets, for which even quadratic running times may be practically infeasible, has led to a rethinking of this assumption. This effort has led to a number of breakthroughs on fundamental problems, such as computing the maximum flow through a network, which can now be solved in essentially linear time in many cases. These advances are largely based on novel algorithmic tools stemming from convex and numerical optimization, which heavily rely on continuous mathematics. Based on this paradigm, the PIs aim to provide improved algorithms for two classes of problems that are important both in theory and practice: maximum flow problems and submodular optimization problems. These problems are central to computer science and have wide applicability to many real-life problems. The project will support and train two Ph.D. students and a postdoc in algorithm design and optimization at Boston University. The proposed research entails a fruitful exchange of ideas between theoretical computer science and continuous optimization, strengthening existing ties as well as forging new connections between the two areas. The continuous viewpoint provides a new perspective on algorithm design that the PIs plan to disseminate broadly and to incorporate into their optimization courses at Boston University.


In this project, the PIs will delve deeply into the connection between efficient discrete optimization and continuous mathematics by considering the following continuous approach to algorithm design: algorithms are initially conceived as continuous trajectories through the space of solutions to the problems, e.g., as described by a set of differential equations; these trajectories are then discretized to yield true discrete-time algorithms that are provably fast. The PIs plan to exploit the new understanding of algorithms obtained through this framework to provide improved algorithms for two classes of problems that are important both in theory and practice: maximum flow problems and submodular optimization problems. The main technical focus of the project will be understanding and leveraging the idea of "acceleration", which plays a central role in convex optimization, as it yields optimal gradient-descent algorithms for a large class of functions. A very recent line of work initiated the study of accelerated methods from a continuous-time perspective using ordinary differential equations (ODEs) and classical discretization tools. More precisely, these works aim to represent a class of accelerated methods via ODEs whose dynamics describe the continuous-time limits of the discrete algorithms, while the discrete algorithms can be interpreted as appropriate discretizations of the continuous-time curves described by the ODEs.
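The ODE-to-discretization view described above can be illustrated with the best-known example of this correspondence: Nesterov's accelerated gradient method, whose continuous-time limit is the ODE x'' + (3/t) x' + grad f(x) = 0 studied by Su, Boyd, and Candes. The sketch below (a generic illustration, not code from this project) runs the standard discrete scheme on a simple smooth convex quadratic and checks it against the classical O(1/k^2) convergence guarantee:

```python
import numpy as np

def nesterov(grad, x0, L, steps):
    """Nesterov's accelerated gradient method with momentum (k-1)/(k+2).

    The continuous-time limit of this iteration is the ODE
        x'' + (3/t) x' + grad f(x) = 0,
    and the discrete scheme can be read as a discretization of that
    trajectory with effective step size 1/sqrt(L).
    """
    x, y = x0.copy(), x0.copy()
    for k in range(1, steps + 1):
        x_next = y - grad(y) / L                        # gradient step from the extrapolated point
        y = x_next + (k - 1) / (k + 2) * (x_next - x)   # momentum extrapolation
        x = x_next
    return x

# Smooth convex test problem: f(x) = 0.5 * x^T diag(1..d) x, minimized at 0.
d = 50
diag = np.arange(1, d + 1, dtype=float)
f = lambda x: 0.5 * x @ (diag * x)
grad = lambda x: diag * x
L = diag.max()                # smoothness (Lipschitz-gradient) constant
x0 = np.ones(d)

steps = 100
x_acc = nesterov(grad, x0, L, steps)
f_acc = f(x_acc)
# Classical guarantee for this scheme: f(x_k) - f* <= 2 L ||x0 - x*||^2 / (k+1)^2.
bound = 2 * L * (x0 @ x0) / (steps + 1) ** 2
print(f_acc <= bound)
```

The momentum coefficient (k-1)/(k+2) is exactly the term that, in the continuous-time limit, produces the vanishing 3/t damping in the ODE; plain gradient descent corresponds instead to the heavily damped gradient flow x' = -grad f(x).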

The project will build on this recently established viewpoint to make progress on central combinatorial optimization problems involving graphs and submodular functions. In particular, the project aims to exploit the rich combinatorial structure present in these problems to construct improved discretization procedures for known dynamical systems corresponding to accelerated algorithms. Specific problems of interest include: (a) Maximum s-t flows and connectivity problems in graphs and networks - The emphasis will be on leveraging convex optimization techniques such as acceleration in order to obtain faster approximate solutions for these problems in undirected and directed graphs. (b) Constrained submodular maximization problems - A particular area of focus will be on designing new continuous algorithms and discretizations for solving known continuous relaxations of submodular objectives under structured constraints, such as packing and covering constraints.
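For context on problem (a), the classical combinatorial baseline that the continuous methods aim to improve upon is augmenting-path maximum flow. The following minimal Edmonds-Karp sketch (a textbook algorithm, not a method from this project) runs in O(VE^2), far from the nearly-linear-time goal, but it is a handy correctness reference:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along a shortest s-t path.

    `cap` is a dict-of-dicts of capacities, e.g. cap['s']['a'] = 3.
    The dict is mutated in place to hold residual capacities.
    """
    # Ensure reverse residual arcs exist for every forward arc.
    for u in list(cap):
        for v in list(cap[u]):
            cap.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Recover the path and its bottleneck capacity, then augment.
        path = []
        v = t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= aug
            cap[v][u] += aug
        flow += aug

demo = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}}
print(max_flow(demo, 's', 't'))
```

Nearly-linear-time approaches instead recast maximum flow as a continuous optimization problem (e.g., an L1 or L-infinity regression over the flow polytope) and apply accelerated first-order methods to it.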

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


(Showing: 1 - 10 of 17)
Allen-Zhu, Zeyuan and Orecchia, Lorenzo "Nearly linear-time packing and covering LP solvers: Achieving width-independence and -convergence" Mathematical Programming , 2018 10.1007/s10107-018-1244-x Citation Details
Axiotis, Kyriakos and Madry, Aleksander and Vladu, Adrian "Circulation Control for Faster Minimum Cost Flow in Unit-Capacity Graphs" IEEE Symposium on Foundations of Computer Science , 2020 https://doi.org/ Citation Details
Cohen, Michael and Diakonikolas, Jelena and Orecchia, Lorenzo "On Acceleration with Noise-Corrupted Gradients" Proceedings of the 35th International Conference on Machine Learning, , 2018 Citation Details
Diakonikolas, Jelena and Fazel, Maryam and Orecchia, Lorenzo "Fair Packing and Covering on a Relative Scale" SIAM Journal on Optimization , v.30 , 2020 https://doi.org/10.1137/19M1288516 Citation Details
Diakonikolas, Jelena and Orecchia, Lorenzo "Accelerated Extra-Gradient Descent: A Novel Accelerated First-Order Method" 9th Innovations in Theoretical Computer Science Conference (ITCS 2018) , v.94 , 2018 10.4230/LIPIcs.ITCS.2018.23 Citation Details
Diakonikolas, Jelena and Orecchia, Lorenzo "Alternating Randomized Block Coordinate Descent" Proceedings of the 35th International Conference on Machine Learning , 2018 Citation Details
Diakonikolas, Jelena and Orecchia, Lorenzo "The Approximate Duality Gap Technique: A Unified Theory of First-Order Methods" SIAM Journal on Optimization , v.29 , 2019 https://doi.org/10.1137/18M1172314 Citation Details
Ene, Alina and Nguyen, Huy L "Parallel Algorithm for Non-Monotone DR-Submodular Maximization" International Conference on Machine Learning , 2020 https://doi.org/ Citation Details
Ene, Alina and Nguyen, Huy L. "A Nearly-linear Time Algorithm for Submodular Maximization with a Knapsack Constraint" International Colloquium on Automata, Languages, and Programming , v.132 , 2019 Citation Details

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

The project led to the development of novel algorithmic frameworks for discrete and continuous optimization. On the continuous optimization front, the novel techniques and associated algorithms developed as part of this award have significantly advanced the understanding of accelerated methods, an important class of first-order methods that provide optimal convergence guarantees for smooth convex optimization problems, i.e., problems involving the minimization of smooth (continuously differentiable) convex functions. Many practically important problems, such as regression problems, belong to this class. Moreover, techniques inspired by accelerated methods are of great importance beyond the convex case, as they are often applied to speed up the solution of non-convex problems in machine learning applications, such as deep learning. Specifically, the novel approximate-duality-gap framework gives a simple, unified explanation of accelerated methods and standard first-order methods. It shows that all such methods are based on iteratively constructing a primal-dual pair of solutions, i.e., a primal solution to the original problem and a dual solution certifying a lower bound on the value of the optimum. All methods can be methodically derived by specifying the form of such a primal-dual pair, which is an immediate consequence of the problem assumptions, and by driving the gap between the values of the primal and dual solutions to zero. The benefits of these new insights are already evident from related results funded by this award, including novel algorithms for acceleration in the presence of noise and improved methods for resource allocation problems. Beyond the time frame of this award, the approximate-duality-gap framework provides a systematic foundation to connect algorithm design within theoretical computer science to the study of natural continuous-time dynamical systems and to the calculus of variations.
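The primal-dual principle described above can be made concrete with a toy example (an illustration of the general idea, not the paper's construction): for a smooth convex f minimized over a ball, every gradient linearization of f under-estimates it, so the minimum of that linearization over the ball is a certified lower bound on the optimal value. Running plain gradient descent and tracking the gap between the current function value and the best such lower bound gives an explicit, shrinking optimality certificate:

```python
import numpy as np

# Minimize f(x) = 0.5 * x^T diag(1..5) x over the ball ||x|| <= R.
# By convexity, f(x) >= f(y) + g(y).(x - y) for all x, so minimizing the
# right-hand side over the ball gives f(y) - g.y - R*||g||, a certified
# lower bound on the optimal value f* (here f* = 0).
diag = np.arange(1, 6, dtype=float)
f = lambda x: 0.5 * x @ (diag * x)
grad = lambda x: diag * x
R, L = 2.0, diag.max()

x = np.full(5, 0.5)          # feasible start: ||x|| ~ 1.12 <= R
best_lb = -np.inf
gaps = []
for _ in range(300):
    g = grad(x)
    lb = f(x) - g @ x - R * np.linalg.norm(g)   # min of the linearization over the ball
    best_lb = max(best_lb, lb)
    gaps.append(f(x) - best_lb)                 # duality gap: upper bound minus lower bound
    x = x - g / L                               # plain gradient step
print(gaps[0], gaps[-1])
```

The approximate-duality-gap viewpoint treats the design of a first-order method as the choice of how this gap is constructed and how quickly it is driven to zero; acceleration corresponds to a particularly efficient such choice.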

On the discrete optimization front, the project led to the development of very efficient sequential and parallel algorithms for maximizing submodular functions, a broad class of discrete functions that capture the key diminishing-returns property of utility functions in economics, joint entropy functions in information theory, and clustering and other objective functions in information retrieval and machine learning. The parallel algorithms developed achieve exponential speedups in parallel running time over existing methods. The algorithms smoothly interpolate between linear and more general submodular objectives. In practice, where the objective does not exhibit worst-case submodular behavior, this interpolation leads to further improved performance. The sequential algorithms make progress toward achieving very fast and practical running times that scale nearly linearly with the size of the input data. The project also tackled the problem of finding a maximum flow in a graph, a fundamental graph optimization problem that is a building block in many applications, and led to the development of a faster algorithm for finding a minimum cost flow in a graph. The aforementioned results combine combinatorial structure with techniques from the field of continuous optimization, such as continuous relaxations, gradient descent primitives, and differential equations. The connections established between discrete and continuous optimization have the potential to lead to further progress in theoretical computer science, machine learning, and beyond.
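The diminishing-returns property mentioned above is what makes simple greedy strategies provably good for submodular maximization. The canonical example (a textbook baseline, not this project's parallel or nearly-linear-time algorithms) is greedy max coverage under a cardinality constraint, which is guaranteed a (1 - 1/e) fraction of the optimum:

```python
from itertools import combinations

def greedy_max_cover(sets, k):
    """Classic greedy for max coverage, a canonical monotone submodular
    objective: repeatedly pick the set with the largest marginal gain.
    Achieves at least a (1 - 1/e) fraction of the optimal coverage."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen, len(covered)

# Small instance where brute-force comparison is feasible.
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6, 7}, {2, 5}]
k = 2
_, greedy_val = greedy_max_cover(sets, k)
opt = max(len(set().union(*(sets[i] for i in c)))
          for c in combinations(range(len(sets)), k))
print(greedy_val, opt)
```

The faster algorithms developed under this award replace this one-element-at-a-time greedy loop with continuous relaxations (such as the multilinear extension) that can be optimized with few parallel rounds or nearly-linear total work.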


Last Modified: 12/29/2020
Modified by: Alina Ene
