
NSF Org: CCF Division of Computing and Communication Foundations
Recipient:
Initial Amendment Date: June 20, 2014
Latest Amendment Date: August 1, 2019
Award Number: 1408673
Award Instrument: Continuing Grant
Program Manager: Tracy Kimbrel, CCF Division of Computing and Communication Foundations, CSE Directorate for Computer and Information Science and Engineering
Start Date: September 1, 2014
End Date: August 31, 2020 (Estimated)
Total Intended Award Amount: $366,166.00
Total Awarded Amount to Date: $366,166.00
Funds Obligated to Date: FY 2015 = $99,175.00; FY 2016 = $84,116.00; FY 2017 = $86,819.00
History of Investigator:
Recipient Sponsored Research Office: 341 Pine Tree Rd, Ithaca, NY 14850-2820, US, (607) 255-5014
Sponsor Congressional District:
Primary Place of Performance: 4130 Upson Hall, Ithaca, NY 14853-7501, US
Primary Place of Performance Congressional District:
Unique Entity Identifier (UEI):
Parent UEI:
NSF Program(s): Algorithmic Foundations
Primary Program Source: 01001516DB NSF RESEARCH & RELATED ACTIVIT; 01001617DB NSF RESEARCH & RELATED ACTIVIT; 01001718DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s):
Program Element Code(s):
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
Mathematical programming is a powerful tool for attacking combinatorial problems. One transforms a discrete task into a related continuous one by casting it as optimization over a convex body. Linear and semi-definite programming (LP and SDP) form important special cases and are central tools in the theory and practice of combinatorial optimization. These approaches have achieved spectacular success in computing approximately optimal solutions for problems where finding exact solutions is computationally intractable.
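As a concrete illustration of casting a discrete problem as a continuous relaxation (a sketch added here, not part of the award abstract), the following snippet solves the standard LP relaxation of minimum vertex cover and rounds the fractional solution; the small example graph and the use of scipy.optimize.linprog are assumptions made for the example.

```python
# Minimal sketch: LP relaxation of minimum vertex cover with threshold rounding.
# Variables x_v in [0, 1]; minimize sum_v x_v subject to x_u + x_v >= 1 per edge.
# Rounding x_v >= 1/2 up to 1 yields a cover at most twice the optimum.
import numpy as np
from scipy.optimize import linprog

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # example graph (assumption)
n = 4

c = np.ones(n)                        # minimize the total fractional cover size
A_ub = np.zeros((len(edges), n))      # -x_u - x_v <= -1 encodes x_u + x_v >= 1
for i, (u, v) in enumerate(edges):
    A_ub[i, u] = A_ub[i, v] = -1.0
b_ub = -np.ones(len(edges))

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n, method="highs")
cover = [v for v in range(n) if res.x[v] >= 0.5]   # threshold rounding
print("LP value:", res.fun, "rounded cover:", cover)
```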
While there are very strong bounds known on the efficacy of particular families of relaxations, it remains possible that adding a small number of variables or constraints could lead to drastically improved solutions. We propose the development of a theory to unconditionally capture the power of LPs and SDPs without any complexity-theoretic assumptions. Our approach has the potential to show something remarkable: For many well-known problems, the basic LP or SDP is optimal among a very large class of algorithms. More concretely, we suggest a method that could rigorously characterize the power of polynomial-size LPs and SDPs for a variety of combinatorial optimization tasks. This involves deep issues at the intersection of many areas of mathematics and computer science, with the ultimate goal of significantly extending our understanding of efficient computation.
Mathematical programming is of major importance to many fields---this is especially true for computer science and operations research. These methods have also seen dramatically increasing use in the analysis of "big data" from across the scientific spectrum. From a different perspective, LPs and SDPs can be thought of as rich proof systems, and characterizing their power is a basic problem in the theory of proof complexity. Thus the outcomes of the proposed research are of interest to a broad community of scientists, mathematicians, and practitioners.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the embargo (administrative interval). Some links on this page may take you to non-federal websites; their policies may differ from those of this site.
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
The original goal of the project was to develop a theory characterizing the power of algorithms based on linear programming and semidefinite programming for discrete optimization problems. These kinds of algorithms capture and generalize the best-known algorithms for a wide range of problems. In contrast to the more traditional approach of understanding the power of algorithms via reductions, the theory we aimed to develop would not rely on complexity-theoretic assumptions. As an example, we studied polynomial-time algorithms for estimating the mean of a heavy-tailed multivariate random vector. When the mean of a heavy-tailed random process is estimated via the empirical mean, the resulting confidence intervals are large, and hence the estimate is imprecise. We gave the first polynomial-time algorithm that estimates the mean with compact confidence intervals under mild assumptions. Our algorithm is based on a new semidefinite programming relaxation of a high-dimensional median. Previous estimators with comparable guarantees were only known to be computable via brute-force search procedures requiring time exponential in the dimension.
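For intuition, the sketch below shows the classical median-of-means idea that such heavy-tailed estimators build on; it is an added illustration, not the semidefinite-programming estimator from the project, and the synthetic Student-t data and the Weiszfeld iteration for the geometric median are assumptions made for the example.

```python
# Sketch: (geometric) median-of-means estimation of a heavy-tailed mean.
# Not the polynomial-time SDP-based estimator described above.
import numpy as np

def median_of_means(X, k, iters=100):
    """Split the n x d sample X into k buckets, average each bucket, and
    return the geometric median of the bucket means (Weiszfeld iteration)."""
    rng = np.random.default_rng(0)
    buckets = np.array_split(rng.permutation(X), k)
    means = np.array([b.mean(axis=0) for b in buckets])
    z = means.mean(axis=0)                      # initial guess
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(means - z, axis=1), 1e-12)
        z = (means / d[:, None]).sum(axis=0) / (1.0 / d).sum()
    return z

# Synthetic heavy-tailed sample (assumption): Student-t, true mean is zero.
rng = np.random.default_rng(1)
X = rng.standard_t(df=2.5, size=(10_000, 20))
print("error of empirical mean:  ", np.linalg.norm(X.mean(axis=0)))
print("error of median-of-means: ", np.linalg.norm(median_of_means(X, k=50)))
```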
In the last two years, with NSF approval, the focus of the project shifted to understanding learning strategies that are effective for players in repeated games. The idea is to develop strategies that work well in adversarial environments. Two different objectives were considered:
(1) how well an individual player does, measured by regret: its performance compared to the single best fixed strategy in hindsight (illustrated in the sketch after this list);
(2) the overall performance of the game when all players follow a learning strategy.
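To make the regret objective in (1) concrete, the following is a minimal sketch (an added illustration, not code from the project) of the multiplicative-weights (Hedge) algorithm, a standard no-regret strategy whose average regret against the best fixed action vanishes as the number of rounds grows; the random loss matrix and the step size are assumptions for the example.

```python
# Sketch of a no-regret learner: Hedge / multiplicative weights.
# Regret = learner's total (expected) loss minus the loss of the best
# single action chosen in hindsight.
import numpy as np

def hedge_regret(losses, eta):
    """losses: T x K array of per-round losses in [0, 1] for K actions."""
    T, K = losses.shape
    w = np.ones(K)
    total = 0.0
    for t in range(T):
        p = w / w.sum()                    # play a distribution over actions
        total += p @ losses[t]             # expected loss this round
        w *= np.exp(-eta * losses[t])      # downweight actions that did poorly
    best_fixed = losses.sum(axis=0).min()  # best single action in hindsight
    return total - best_fixed

rng = np.random.default_rng(0)
T, K = 5_000, 10
losses = rng.random((T, K))                # the environment could be adversarial
eta = np.sqrt(8 * np.log(K) / T)           # standard step size: regret O(sqrt(T log K))
print("average regret per round:", hedge_regret(losses, eta) / T)
```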
The main project on learning considers a queuing system in which the queues use no-regret learning algorithms to selfishly optimize their own performance. When players compete selfishly, the outcome can be worse than what coordinated optimization could achieve; the price of anarchy measures this damage to social welfare caused by the participants' selfish behavior. In this work, we study the phenomenon in a game modeling queuing systems: queues compete for servers, and packets that do not get served are resent in future rounds, so the number of packets present at each round depends on how successful the queues were in previous rounds. Most modern systems use machine learning algorithms when optimizing system performance, and we study the outcome of the process in which each queue optimizes its own performance using machine learning.
The carryover effect caused by packets remaining in the system makes learning in our context a highly dependent random process. We analyze this random process and show that if the system has somewhat increased capacity (enough to serve all packets even at double the packet arrival rate) and the queues use no-regret learning algorithms, then the expected number of packets in the queues remains bounded over time, assuming older packets have priority. This paper is the first to study the effect of selfish learning in a queuing system where the learners compete for resources but rounds are not independent: the number of packets to be routed in each round depends on the queues' success in previous rounds.
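The toy simulation below is an added, assumption-laden illustration of this kind of dynamic, not the model or analysis from the project: a few queues with carryover buffers pick servers using a simplistic exponential-weights rule that is rewarded only when a packet is served, servers favor older packets, and the made-up arrival rates are well below the total service capacity.

```python
# Toy simulation: queues with carryover packets choosing servers by a
# simplistic exponential-weights rule (illustration only; all rates made up).
import numpy as np

rng = np.random.default_rng(0)
N, T = 3, 20_000
lam = np.array([0.30, 0.25, 0.20])    # per-queue packet arrival probabilities
mu = np.array([0.9, 0.7, 0.5])        # per-server success probabilities
eta = 0.05                            # learning rate

weights = np.ones((N, N))             # queue i's weights over the N servers
buffers = [[] for _ in range(N)]      # each buffer stores packet arrival times
backlog = np.zeros(T)

for t in range(T):
    for i in range(N):                # new packets arrive, stamped with round t
        if rng.random() < lam[i]:
            buffers[i].append(t)
    bids = {}                         # server -> list of (timestamp, queue)
    for i in range(N):                # each nonempty queue sends its oldest packet
        if buffers[i]:
            p = weights[i] / weights[i].sum()
            j = rng.choice(N, p=p)
            bids.setdefault(j, []).append((buffers[i][0], i))
    for j, reqs in bids.items():      # a server clears at most one packet,
        if rng.random() < mu[j]:      # favoring the oldest timestamp
            _, winner = min(reqs)
            buffers[winner].pop(0)
            weights[winner, j] *= np.exp(eta)   # reward the successful choice
    backlog[t] = sum(len(b) for b in buffers)

print("average total backlog over the last half:", backlog[T // 2:].mean())
```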
People's opinions are shaped by their interactions with others. The resulting process of opinion dynamics --- the interplay between opinion formation and the network structure of these interactions --- has long been of interest in the political science, sociology, economics, and computer science communities, among others. We study the connections between network structure, opinion dynamics, and an adversary's power to artificially induce disagreement. We approach these questions by extending models of opinion formation from the mathematical social sciences to represent scenarios, familiar from recent events, in which external actors seek to destabilize communities through sophisticated information-warfare tactics such as fake news and bots. In many instances, the intrinsic goal of these efforts is not necessarily to shift the overall sentiment of the network towards a particular policy, but rather to induce discord. Such perturbations diffuse via opinion dynamics on the underlying network, through mechanisms that have been analyzed and abstracted in work across computer science and the social sciences.
We investigate the properties of such attacks, considering optimal strategies both for the adversary seeking to create disagreement and for the entities tasked with defending the network from attack. By employing techniques from spectral graph theory, we show that for different formulations of these types of objectives, different regimes of the spectral structure of the network will limit the adversary's capacity to sow discord. Via the strong connections between spectral and structural properties of graphs, we are able to qualitatively describe which networks are most vulnerable or resilient against these perturbations. We also consider the algorithmic task of a network defender to mitigate these sorts of adversarial attacks by insulating nodes heterogeneously; we show that, by considering the geometry of this problem, this optimization task can be efficiently solved via convex programming.
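As a hedged illustration of this spectral viewpoint (assuming the standard Friedkin-Johnsen dynamics, which may differ in details from the model in the underlying work): with graph Laplacian L and innate opinions s, equilibrium opinions are z = (I + L)^{-1} s and disagreement is z^T L z, so a unit-norm perturbation delta injected on top of a neutral baseline creates disagreement delta^T (I + L)^{-1} L (I + L)^{-1} delta. The most damaging direction is therefore the top eigenvector of that matrix, and the achievable damage is bounded by its top eigenvalue, as the sketch below checks numerically on a random graph.

```python
# Sketch under the assumed Friedkin-Johnsen model: equilibrium z = (I + L)^{-1} s,
# disagreement z^T L z.  Illustration only; not the exact objective from the project.
import numpy as np

rng = np.random.default_rng(0)
n = 30
A = (rng.random((n, n)) < 0.15).astype(float)   # random undirected graph (assumption)
A = np.triu(A, 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A                  # graph Laplacian

M = np.linalg.inv(np.eye(n) + L)                # maps innate opinions to equilibrium
Q = M @ L @ M                                   # disagreement from a perturbation d is d^T Q d

evals, evecs = np.linalg.eigh(Q)
worst = evecs[:, -1]                            # unit-norm perturbation maximizing d^T Q d
z = M @ worst                                   # equilibrium opinions it induces
print("max disagreement from a unit-norm perturbation:", evals[-1])
print("check (z^T L z):", z @ L @ z)            # matches the top eigenvalue
```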
Last Modified: 01/13/2021
Modified by: Eva Tardos
Please report errors in award information by writing to: awardsearch@nsf.gov.