
NSF Org: CCF Division of Computing and Communication Foundations
Initial Amendment Date: February 2, 2015
Latest Amendment Date: May 10, 2019
Award Number: 1453508
Award Instrument: Continuing Grant
Program Manager: Anindya Banerjee, abanerje@nsf.gov, (703) 292-7885, CCF Division of Computing and Communication Foundations, CSE Directorate for Computer and Information Science and Engineering
Start Date: February 15, 2015
End Date: January 31, 2020 (Estimated)
Total Intended Award Amount: $535,043.00
Total Awarded Amount to Date: $535,043.00
Funds Obligated to Date: FY 2017 = $243,624.00; FY 2019 = $92,362.00
Recipient Sponsored Research Office: 107 S Indiana Ave, Bloomington, IN, US 47405-7000, (317) 278-3473
Primary Place of Performance: 150 S Woodlawn, Bloomington, IN, US 47405-7104
NSF Program(s): Software & Hardware Foundation
Primary Program Source: 01001718DB NSF RESEARCH & RELATED ACTIVIT; 01001920DB NSF RESEARCH & RELATED ACTIVIT
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
Title: CAREER: Towards Practical Deterministic Parallel Languages
Parallel, multicore processors have become ubiquitous, but parallel programming has not. This gap implies that many everyday programs do not fully use the hardware on which they run. The problem persists because traditional parallel programming approaches are high-risk: a parallel program can yield inconsistent answers, or even crash, due to unpredictable interactions between simultaneous tasks. Certain classes of programs, however, admit strong mathematical guarantees that they will behave the same in spite of parallel execution. That is, they enable deterministic parallel programming. Functional programming, extended with "LVars" (shared-state data structures that support commuting operations), is one such model. While this theoretical model has been proven deterministic, significant questions remain regarding practical aspects such as efficiency and scalability. This research addresses those questions by developing new LVar data structures and scaling them to larger distributed-memory machines. The intellectual merits are in the development of novel algorithms that support parallel programming. Further, the LVar model provides a new lens through which to view problems in parallel programming, which can lead to downstream discoveries. The project's broader significance and importance are (1) its potential to lower the cost and risk of parallel programming and (2) its educational goal: to employ deterministic parallel programming in introductory programming courses, both at the university level and in K-12 education. Changing how programming is taught may be necessary if leveraging hardware parallelism is to become a normal, unexceptional part of writing software.
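To make the model concrete, the following is a minimal, self-contained sketch in Haskell of the core LVar idea: a shared, grow-only set whose writes are lattice joins (set union) and whose reads block until a threshold is reached. The names here (LVarSet, insertLV, waitFor) are illustrative inventions, not the API of the project's LVish library; the sketch uses only the standard stm and containers packages.

    -- A minimal, hypothetical LVar-like cell (not the LVish API):
    -- state lives in a TVar, writes are lattice joins (set union),
    -- and reads block until a threshold element has been inserted.
    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, takeMVar)
    import Control.Concurrent.STM
    import qualified Data.Set as Set

    -- The LVar's state: a grow-only set of Ints (join = union).
    newtype LVarSet = LVarSet (TVar (Set.Set Int))

    newLVarSet :: IO LVarSet
    newLVarSet = LVarSet <$> newTVarIO Set.empty

    -- Monotonic write: join a singleton into the current state.
    insertLV :: LVarSet -> Int -> IO ()
    insertLV (LVarSet tv) x = atomically (modifyTVar' tv (Set.insert x))

    -- Threshold read: block until the given element is present.
    -- The answer can never be invalidated, because the state only grows.
    waitFor :: LVarSet -> Int -> IO ()
    waitFor (LVarSet tv) x = atomically $ do
      s <- readTVar tv
      if Set.member x s then pure () else retry

    main :: IO ()
    main = do
      lv   <- newLVarSet
      done <- newEmptyMVar :: IO (MVar ())
      -- Two writers insert elements in an unpredictable interleaving.
      _ <- forkIO (mapM_ (insertLV lv) [1 .. 5]  >> putMVar done ())
      _ <- forkIO (mapM_ (insertLV lv) [6 .. 10] >> putMVar done ())
      -- The reader waits for a threshold fact that every schedule agrees on.
      waitFor lv 7
      putStrLn "threshold reached: element 7 is present"
      takeMVar done >> takeMVar done

Because every write only moves the set upward in the lattice, and the reader observes only the threshold fact "element 7 is present," the printed result is the same no matter how the two writer threads interleave.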
Three specific technical challenges are addressed in this research. First, LVars traditionally require more storage space over time, because "delete" operations do not commute with other operations. Semantically, the state space of each LVar forms a join semilattice, and all modifications must move the state "upwards" monotonically. Nevertheless, this project investigates new ways that LVars can free memory, using a concept of Saturating LVars. Second, this research seeks to formalize the relationship of LVar-based parallel programs to their purely functional counterparts, characterizing asymptotic performance advantages. Finally, this project explores the scalability of LVar-based programming abstractions in a distributed-memory setting, where they share similarities with recent distributed programming constructs such as concurrent replicated data structures.
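One way to picture the "Saturating LVars" idea is as a lattice extended with an explicit top element: once a state saturates to Top, its payload can be discarded, because every later join also yields Top. The sketch below is a hypothetical illustration assuming a simple size-bound trigger for saturation; the trigger and representation used in the actual work may differ.

    -- A hypothetical "saturating" lattice (an illustration, not the
    -- project's implementation): a grow-only set with an explicit top.
    -- Once the state saturates to Top, its payload is discarded, since
    -- every further join also yields Top.
    import qualified Data.Set as Set

    data Sat a
      = Partial (Set.Set a)  -- still accumulating elements
      | Top                  -- saturated: contents freed, absorbs all joins
      deriving (Eq, Show)

    -- Smart constructor: any state over the bound collapses to Top.
    mkSat :: Int -> Set.Set a -> Sat a
    mkSat bound s
      | Set.size s > bound = Top
      | otherwise          = Partial s

    -- Lattice join with saturation; Top is absorbing.
    joinSat :: Ord a => Int -> Sat a -> Sat a -> Sat a
    joinSat _     Top          _            = Top
    joinSat _     _            Top          = Top
    joinSat bound (Partial s1) (Partial s2) = mkSat bound (Set.union s1 s2)

    main :: IO ()
    main = do
      let a = Partial (Set.fromList [1, 2, 3 :: Int])
          b = Partial (Set.fromList [4, 5, 6])
      print (joinSat 10 a b)  -- Partial (fromList [1,2,3,4,5,6])
      print (joinSat 4  a b)  -- Top: saturated, memory can be reclaimed

The smart constructor keeps the invariant that any in-bounds state is Partial and anything beyond the bound collapses to Top, so joins remain associative, commutative, and idempotent on reachable states, while the saturated case frees its memory.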
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
Parallelism allows software to run faster, especially on future hardware. This project sought to make parallel programming more widespread by making it safer. If parallelism is time-consuming and risks introducing bugs, then we will continue to see it applied only in select circumstances, rather than in all software. This project refined and explored a particular form of safe parallel programming, based on lattice variables (LVars). These store information shared between different threads of control in the program, but in a nondestructive manner: different parts of the program collaboratively add information, but do not destroy it. LVars are specially constrained so that all interactions through them, while unpredictably ordered, come out the same in the end, yielding reproducible, deterministic program results.
In a series of publications, we explored the limits of the LVars model, for example extending LVars to release memory (PPoPP'16), in seemingly counterintuitive contrast to the "adding information" description above. We also improved the implementation of LVars and tested their scalability limits: for example, we explored adaptive versions of LVar implementations (ICFP'15), which adjust their internal representation to the workload they observe at runtime. We further enhanced the safety of LVar implementations by formally verifying mathematical properties of these data types (POPL'18, Haskell'19).
This research then branched into three directions, leading to follow-on work. First, the verification effort grew into a broader approach for combining formal methods and parallelism (NSF award #1909862). Second, the effort to derive a practical, system-level implementation of LVars turned into a broader effort to build an efficient compiler and runtime for functional programs (the Gibbon compiler; NSF award #1725679). Third, the effort to achieve deterministic parallelism with LVars turned into a broader effort to dynamically enforce deterministic parallelism (OOPSLA'17, ASPLOS'20, and a successful startup company).
Last Modified: 11/22/2020
Modified by: Ryan R Newton