
NSF Org: | CCF Division of Computing and Communication Foundations |
Recipient: | Yale University |
Initial Amendment Date: | August 31, 2016 |
Latest Amendment Date: | August 31, 2016 |
Award Number: | 1650596 |
Award Instrument: | Standard Grant |
Program Manager: | A. Funda Ergun, CCF Division of Computing and Communication Foundations, CSE Directorate for Computer and Information Science and Engineering |
Start Date: | September 1, 2016 |
End Date: | June 30, 2020 (Estimated) |
Total Intended Award Amount: | $265,044.00 |
Total Awarded Amount to Date: | $265,044.00 |
Funds Obligated to Date: | |
History of Investigator: | James Aspnes (Principal Investigator) |
Recipient Sponsored Research Office: | 150 Munson St, New Haven, CT, US 06511-3572, (203) 785-4689 |
Sponsor Congressional District: | |
Primary Place of Performance: | AKWatson Hall, New Haven, CT, US 06520-8285 |
Primary Place of Performance Congressional District: | |
Unique Entity Identifier (UEI): | |
Parent UEI: | |
NSF Program(s): | Algorithmic Foundations |
Primary Program Source: | |
Program Reference Code(s): | |
Program Element Code(s): | |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
Most computer programming describes a sequence of steps to be carried out one by one by a single processor core. As the rate of increase in processor speed has slowed, CPU manufacturers have responded by building systems with many processor cores that together carry out multiple tasks simultaneously. These processors share a common memory that they use both for their own individual computations and for communication with other processors. Organizing this shared memory so that the processors can make progress without being delayed by other processors requires careful coordination and the design of specialized data structures and communication protocols to allow efficient cooperation without conflicts. The project will study how letting processors make random choices between different ways to accomplish the same task can improve the efficiency and reduce the amount of memory used by these data structures, an approach that appears to be necessary given known impossibility results for non-randomized methods. This may significantly improve our ability to exploit the power of multicore machines, while simplifying the work of programmers of these machines. In addition to this impact on computer science and the practice of programming, the project will directly impact both undergraduate and graduate research. Because concurrent data structures are well-suited to undergraduate implementation projects, which avoid difficulties that often arise when involving undergraduates in more theoretical research, the project will serve as a bridge for recruiting students, including students from under-represented groups, into cutting-edge research. At the graduate level, results from the project will feed directly into the PI's teaching, including updates to the PI's publicly available lecture notes already in use by many students at other institutions.
The main question considered by the project is: Can we remove bottlenecks in concurrent executions of programs in shared-memory systems using data structures that avoid traditional concurrency-control tools like locking? Two techniques that appear promising are the use of randomization, which avoids problems where bad timing consistently produces bad executions in which different processes interfere with each other's operations, and limited-use assumptions, where shared data structures are designed under the assumption that they will only be used for a limited number of operations, allowing for significant reductions in cost and complexity. In addition to applying these techniques individually to the design of concurrent shared-memory data structures, the project will also consider how they can complement each other, for example through the use of randomized helping techniques to transform short-lived limited-use data structures into long-lived unlimited-use data structures. A key element of this work will be the development of improved cost measures for shared-memory data structures, including dynamic measures of space complexity that more accurately reflect practical memory usage than the worst-case measures of maximum memory consumption found in previous work.
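As an illustration of the lock-free style of concurrency this work builds on, the following Java sketch (not taken from the award itself) implements a "max register", a standard example of a limited-use object in this literature: writes record a value and reads return the largest value written so far. Here it is built trivially on the hardware compare-and-set exposed by java.util.concurrent.atomic; the constructions studied in this line of research instead build such objects from simpler read/write registers, which is where the limited-use bounds matter.

import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch (not the award's construction): a lock-free max register.
// writeMax records a value; readMax returns the largest value written so far.
// No locks are used: a writer that loses a race retries via compare-and-set.
public class MaxRegister {
    private final AtomicInteger max = new AtomicInteger(Integer.MIN_VALUE);

    public void writeMax(int v) {
        while (true) {
            int cur = max.get();
            if (v <= cur || max.compareAndSet(cur, v)) {
                return; // either nothing to do, or our CAS installed v
            }
            // CAS failed because another thread changed the value; retry.
        }
    }

    public int readMax() {
        return max.get();
    }

    public static void main(String[] args) throws InterruptedException {
        MaxRegister r = new MaxRegister();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) r.writeMax(i); });
        Thread t2 = new Thread(() -> { for (int i = 500; i < 1500; i++) r.writeMax(i); });
        t1.start(); t2.start(); t1.join(); t2.join();
        System.out.println(r.readMax()); // prints 1499
    }
}

The retry loop is the basic lock-free pattern: a thread whose compare-and-set fails simply tries again, and every failure means some other thread succeeded, so the system as a whole keeps making progress even if individual threads are delayed.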
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
The primary goal of the project was to study space complexity in distributed algorithms. Space complexity measures the amount of memory needed to carry out some task. Distributed algorithms are methods for coordinating multiple computers across some sort of network or other communication medium.
Previous work on space complexity looked at the worst-case cost: the maximum amount of memory needed to carry out a task in the worst possible execution. The justification for this approach has been the assumption that memory cannot be used without being pre-allocated. This assumption does not hold for modern computer systems, which allow memory to be allocated as needed, and it can also be relaxed when it is tolerable for an algorithm to fail in the rare cases where extra memory turns out to be needed. The cost of requiring pre-allocation is that many problems in distributed algorithms are known to have very high space costs in the worst case.
The project considered an alternative approach where an algorithm is charged only for the memory it uses in a particular execution. We showed that this allows for much more space-efficient algorithms for fundamental coordination problems like mutual exclusion, where we wish to prevent more than one of several processes from accessing some shared resource at any given time.
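To make the problem statement concrete, the following Java sketch shows mutual exclusion implemented with a single test-and-set bit (an AtomicBoolean). It illustrates only the correctness requirement, that at most one thread is in its critical section at a time; it does not reproduce the space-efficient algorithms developed under the award, which work from weaker primitives and are analyzed under the per-execution space measure described above.

import java.util.concurrent.atomic.AtomicBoolean;

// Illustration of the mutual exclusion problem only: a test-and-set spinlock.
// At most one thread holds the lock at a time, so at most one thread can be
// in its critical section.
public class SpinLock {
    private final AtomicBoolean held = new AtomicBoolean(false);

    public void lock() {
        // Spin until we atomically flip held from false to true.
        while (!held.compareAndSet(false, true)) {
            Thread.onSpinWait(); // hint to the runtime that we are busy-waiting
        }
    }

    public void unlock() {
        held.set(false);
    }

    public static void main(String[] args) throws InterruptedException {
        SpinLock lock = new SpinLock();
        int[] counter = {0};
        Runnable worker = () -> {
            for (int i = 0; i < 100_000; i++) {
                lock.lock();
                try { counter[0]++; }      // critical section
                finally { lock.unlock(); }
            }
        };
        Thread a = new Thread(worker), b = new Thread(worker);
        a.start(); b.start(); a.join(); b.join();
        System.out.println(counter[0]); // 200000: increments never interleave
    }
}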
Additional work on the project looked at similar issues for other models of distributed computation, including the population protocol model, a representation of loosely-coordinated computation by very limited computational agents. (Examples include computations by chemical reactions, in which the agents are individual molecules in a solution.) Here the concern was showing that computations could be carried out without requiring too much memory on any individual agent. The project produced significant new results on computing the majority function with limited space in this model.
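For readers unfamiliar with the model, the following Java simulation sketches the classic three-state approximate-majority population protocol of Angluin, Aspnes, and Eisenstat: each agent holds one of three states, and a random scheduler repeatedly picks a pair of agents to interact. It is included only to show what a population protocol looks like; the award's new results on exact majority with limited space are not reproduced here.

import java.util.Random;

// Simulation sketch of the three-state approximate-majority protocol.
// Each agent is in state X, Y, or BLANK. The scheduler picks an ordered pair
// (initiator, responder) uniformly at random and applies:
//   (X, Y) -> (X, BLANK)   (Y, X) -> (Y, BLANK)   // disagreement blanks the responder
//   (X, BLANK) -> (X, X)   (Y, BLANK) -> (Y, Y)   // a blank adopts the initiator's opinion
// With high probability the population converges to the initial majority opinion.
public class ApproximateMajority {
    enum State { X, Y, BLANK }

    public static void main(String[] args) {
        Random rng = new Random(42);
        int n = 1000, initialX = 600;          // 600 X agents vs. 400 Y agents
        State[] agents = new State[n];
        for (int i = 0; i < n; i++) agents[i] = (i < initialX) ? State.X : State.Y;

        long interactions = 0;
        while (!converged(agents)) {
            int i = rng.nextInt(n), j = rng.nextInt(n);
            if (i == j) continue;              // an agent does not interact with itself
            State a = agents[i], b = agents[j];
            if (a != State.BLANK && b != State.BLANK && a != b) {
                agents[j] = State.BLANK;       // disagreement: responder becomes blank
            } else if (a != State.BLANK && b == State.BLANK) {
                agents[j] = a;                 // blank responder adopts initiator's opinion
            }
            interactions++;
        }
        System.out.println("Converged to " + agents[0] + " after " + interactions + " interactions");
    }

    // Converged when every agent holds the same non-blank opinion.
    static boolean converged(State[] agents) {
        State first = agents[0];
        if (first == State.BLANK) return false;
        for (State s : agents) if (s != first) return false;
        return true;
    }
}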
Broader impacts of the project included contributions to education through the participation of students in the project's research.
Last Modified: 01/09/2021
Modified by: James Aspnes