Award Abstract # 1650596
EAGER: Concurrent Data Structures

NSF Org: CCF
Division of Computing and Communication Foundations
Recipient: YALE UNIV
Initial Amendment Date: August 31, 2016
Latest Amendment Date: August 31, 2016
Award Number: 1650596
Award Instrument: Standard Grant
Program Manager: A. Funda Ergun
CCF, Division of Computing and Communication Foundations
CSE, Directorate for Computer and Information Science and Engineering
Start Date: September 1, 2016
End Date: June 30, 2020 (Estimated)
Total Intended Award Amount: $265,044.00
Total Awarded Amount to Date: $265,044.00
Funds Obligated to Date: FY 2016 = $265,044.00
History of Investigator:
  • James Aspnes (Principal Investigator)
    aspnes@cs.yale.edu
Recipient Sponsored Research Office: Yale University
150 MUNSON ST
NEW HAVEN
CT  US  06511-3572
(203)785-4689
Sponsor Congressional District: 03
Primary Place of Performance: Yale University
AK Watson Hall
New Haven
CT  US  06520-8285
Primary Place of Performance Congressional District: 03
Unique Entity Identifier (UEI): FL6GV84CKN57
Parent UEI: FL6GV84CKN57
NSF Program(s): Algorithmic Foundations
Primary Program Source: 01001617DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 7916, 7926, 7934
Program Element Code(s): 779600
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Most computer programming describes a sequence of steps to be carried out one by one by a single processor core. As the rate of increase in processor speed has slowed, CPU manufacturers have responded by building systems with many processor cores that together carry out multiple tasks simultaneously. These processors share a common memory that they use both for their own individual computations and for communication with other processors. Organizing this shared memory so that the processors can make progress without being delayed by other processors requires careful coordination and the design of specialized data structures and communication protocols to allow efficient cooperation without conflicts. The project will study how letting processors make random choices between different ways to accomplish the same task can improve the efficiency and reduce the amount of memory used by these data structures, an approach that appears to be necessary given known impossibility results for non-randomized methods. This may significantly improve our ability to exploit the power of multicore machines, while simplifying the work of programmers of these machines. In addition to this impact on computer science and the practice of programming, the project will directly impact both undergraduate and graduate research. Because concurrent data structures are well-suited to undergraduate implementation projects, which avoid difficulties that often arise when involving undergraduates in more theoretical research, the project will serve as a bridge for recruiting students into cutting-edge, high-stakes research, including students from under-represented groups. At the graduate level, results from the project will feed directly into the PI's teaching, including updates to the PI's publicly available lecture notes already in use by many students at other institutions.

The main question considered by the project is: Can we remove bottlenecks in concurrent executions of programs in shared-memory systems using data structures that avoid traditional concurrency-control tools like locking? Two techniques that appear promising are the use of randomization, which avoids problems where adversarial timing consistently produces executions in which different processes interfere with each other's operations, and limited-use assumptions, where shared data structures are designed under the assumption that they will only be used for a limited number of operations, allowing for significant reductions in cost and complexity. In addition to applying these techniques individually to the design of concurrent shared-memory data structures, the project will also consider how these methods can complement each other, for example by using randomized helping techniques to transform short-lived limited-use data structures into long-lived unlimited-use data structures. A key element of this work will be the development of improved cost measures for shared-memory data structures, including dynamic measures of space complexity that more accurately reflect practical memory usage than the worst-case measures of maximum memory consumption found in previous work.
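
To give a concrete flavor of the lock-free shared objects discussed above, here is a minimal Java sketch of a max register, an object that returns the largest value ever written to it (max registers appear in the project's publications, e.g., "Consensus with Max Registers"). This simple compare-and-swap loop is illustrative only: it is lock-free rather than wait-free, and it is not the limited-use construction analyzed in the research.

    import java.util.concurrent.atomic.AtomicInteger;

    /**
     * Minimal lock-free max register sketch: read() returns the largest
     * value ever written. A writer may retry under contention, so this
     * is lock-free, not wait-free.
     */
    public class MaxRegister {
        private final AtomicInteger value = new AtomicInteger(Integer.MIN_VALUE);

        /** Record v if it exceeds every value written so far. */
        public void writeMax(int v) {
            int cur = value.get();
            // Retry only while our value is still larger than the stored one.
            while (v > cur && !value.compareAndSet(cur, v)) {
                cur = value.get();
            }
        }

        /** Return the maximum value written so far. */
        public int read() {
            return value.get();
        }
    }

Because writeMax never blocks and read is a single atomic load, no process is ever delayed indefinitely by a stalled process holding a lock, which is exactly the kind of bottleneck the project aims to remove.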

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

(Showing: 1 - 10 of 14)
Alistarh, Dan and Aspnes, James and Gelashvili, Rati. "Space-Optimal Majority in Population Protocols." ACM-SIAM Symposium on Discrete Algorithms, 2018. DOI: 10.1137/1.9781611975031.144
Afek, Yehuda and Aspnes, James and Cohen, Edo and Vainstein, Danny. "Brief Announcement: Object Oriented Consensus." ACM Symposium on Principles of Distributed Computing, 2017. DOI: 10.1145/3087801.3087867
Alistarh, Dan and Aspnes, James and Eisenstat, David and Gelashvili, Rati and Rivest, Ronald L. "Time-Space Trade-offs in Population Protocols." Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, 2017. DOI: 10.1137/1.9781611974782.169
Alistarh, Dan and Aspnes, James and Ellen, Faith and Gelashvili, Rati and Zhu, Leqi. "Why extension-based proofs fail." 51st Annual ACM SIGACT Symposium on Theory of Computing, 2019. DOI: 10.1145/3313276.3316407
Amir, Talley and Aspnes, James and Doty, David and Eftekhari, Mahsa and Severson, Eric E. "Message Complexity of Population Protocols." 34th International Symposium on Distributed Computing (DISC 2020), v.179, 2020. DOI: 10.4230/LIPIcs.DISC.2020.6
Amir, Talley and Aspnes, James and Lazarsfeld, John. "Approximate Majority With Catalytic Inputs." OPODIS 2020, 2020.
Aspnes, James. "Clocked Population Protocols." ACM Symposium on Principles of Distributed Computing, 2017. DOI: 10.1145/3087801.3087836
Aspnes, James and Beauquier, Joffroy and Burman, Janna and Sohier, Devan. "Time and Space Optimal Counting in Population Protocols." Leibniz International Proceedings in Informatics, v.70, 2016. DOI: 10.4230/LIPIcs.OPODIS.2016.13
Aspnes, James and Er, He Yang. "Consensus with Max Registers." Leibniz International Proceedings in Informatics, v.146, 2019. DOI: 10.4230/LIPIcs.DISC.2019.1
Aspnes, James and Haeupler, Bernhard and Tong, Alexander and Woelfel, Philipp. "Allocate-On-Use Space Complexity of Shared-Memory Algorithms." International Symposium on Distributed Computing, 2018. DOI: 10.4230/LIPIcs.DISC.2018.8

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

The primary goal of the project was to study space complexity in distributed algorithms. Space complexity measures the amount of memory needed to carry out some task. Distributed algorithms are methods for coordinating multiple computers across some sort of network or other communication medium.

Previous work on space complexity looked at the worst-case cost: the maximum amount of memory needed to carry out a task in the worst possible execution. The justification for this approach has been the assumption that memory cannot be used without being pre-allocated. This assumption is violated by modern computer systems, which allow memory to be allocated as needed; it is also unnecessary when it is tolerable for an algorithm to fail in the rare cases where extra memory turns out to be needed. The cost of requiring pre-allocation is that many problems in distributed algorithms are known to have very high space costs in the worst case.

The project considered an alternative approach where an algorithm is charged only for the memory it uses in a particular execution. We showed that this allows for much more space-efficient algorithms for fundamental coordination problems like mutual exclusion, where we wish to prevent more than one of several processes from accessing some shared resource at any given time.
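
As an illustration of this accounting, the following hypothetical Java sketch charges an execution only for the array cells it actually touches: unwritten cells read as a default value and consume no space. The class name and interface are invented for this example; it is meant only to convey the allocate-on-use space measure, not the project's algorithms.

    import java.util.concurrent.ConcurrentHashMap;

    /**
     * Illustrative allocate-on-use register array: a cell consumes
     * memory only once some process actually writes it, so the space
     * charged to an execution is the number of cells touched, not the
     * declared capacity.
     */
    public class AllocateOnUseArray<V> {
        private final ConcurrentHashMap<Integer, V> cells = new ConcurrentHashMap<>();
        private final V defaultValue;

        public AllocateOnUseArray(V defaultValue) {
            this.defaultValue = defaultValue;
        }

        public void write(int index, V value) {
            cells.put(index, value); // the cell is allocated on first write
        }

        public V read(int index) {
            return cells.getOrDefault(index, defaultValue); // unwritten cells read as default
        }

        /** Dynamic space measure: cells actually allocated in this execution. */
        public int spaceUsed() {
            return cells.size();
        }
    }

Under this measure, an algorithm that declares a huge address space but touches only a handful of cells in a typical execution is charged for the handful, which is how it behaves on real systems with on-demand allocation.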

Additional work on the project looked at similar issues for other models of distributed computation, including the population protocol model, a representation of loosely-coordinated computation by very limited computational agents. (Examples include computations by chemical reactions, in which the agents are individual molecules in a solution.) Here the concern was showing that computations could be carried out without requiring too much memory on any individual agent. The project produced significant new results on computing the majority function with limited space in this model.
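
For intuition, here is a toy Java simulation of the classic three-state approximate-majority population protocol, in which agents meeting in random pairs drive the whole population toward the initial majority value using only constant memory per agent. The population size, initial split, and step cap are arbitrary choices for the example, and the protocol shown is a textbook one rather than the project's exact constructions.

    import java.util.Random;

    /**
     * Toy simulator for the three-state approximate-majority population
     * protocol (states X, Y, and blank B). On each step a random initiator
     * meets a random responder and may convert it.
     */
    public class ApproximateMajority {
        enum State { X, Y, B }

        public static void main(String[] args) {
            Random rng = new Random(42);
            State[] agents = new State[1000];
            for (int i = 0; i < agents.length; i++) {
                agents[i] = (i < 520) ? State.X : State.Y; // slight X majority
            }
            // Random pairwise interactions, up to a fixed step cap.
            for (long step = 0; step < 1_000_000; step++) {
                int a = rng.nextInt(agents.length);
                int b = rng.nextInt(agents.length);
                if (a == b) continue;
                State ini = agents[a], res = agents[b];
                if (ini == State.X && res == State.Y) agents[b] = State.B;      // opposite opinion: blank it
                else if (ini == State.Y && res == State.X) agents[b] = State.B;
                else if (res == State.B && ini != State.B) agents[b] = ini;     // blank adopts initiator's opinion
            }
            int x = 0, y = 0, blank = 0;
            for (State s : agents) {
                if (s == State.X) x++; else if (s == State.Y) y++; else blank++;
            }
            System.out.printf("X=%d Y=%d B=%d%n", x, y, blank);
        }
    }

With high probability the population converges to the initial majority value, and each agent needs only three states, which is the kind of per-agent space bound at issue in the project's majority results.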

Broader impacts of the project included contributions to educational development through the participation of students in the project's research.


Last Modified: 01/09/2021
Modified by: James Aspnes
