
NSF Org: | CCF Division of Computing and Communication Foundations |
Recipient: |
Initial Amendment Date: | August 14, 2014 |
Latest Amendment Date: | August 14, 2014 |
Award Number: | 1421612 |
Award Instrument: | Standard Grant |
Program Manager: | Anindya Banerjee, abanerje@nsf.gov, (703) 292-7885, CCF Division of Computing and Communication Foundations, CSE Directorate for Computer and Information Science and Engineering |
Start Date: | September 1, 2014 |
End Date: | August 31, 2018 (Estimated) |
Total Intended Award Amount: | $364,979.00 |
Total Awarded Amount to Date: | $364,979.00 |
Funds Obligated to Date: |
History of Investigator: |
Recipient Sponsored Research Office: | 1960 KENNY RD, COLUMBUS, OH, US 43210-1016, (614) 688-8735 |
Sponsor Congressional District: |
Primary Place of Performance: | OH, US 43210-1063 |
Primary Place of Performance Congressional District: |
Unique Entity Identifier (UEI): |
Parent UEI: |
NSF Program(s): | Software & Hardware Foundation |
Primary Program Source: |
Program Reference Code(s): |
Program Element Code(s): |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
Title: SHF: Small: Collaborative Research: Hybrid Static-Dynamic Analyses for Region Serializability
Computer systems' performance has grown exponentially for decades, enabling advances in science, health, engineering, and other areas. However, due to power, heat, and wire-length limitations, chip manufacturers are now producing microprocessors that have more, instead of faster, computing cores. To scale with this increasingly parallel hardware, software systems must become more parallel. However, writing correct, scalable shared-memory programs is notoriously difficult. A key challenge is that modern programming languages and software and hardware systems provide virtually no guarantees for programs that have a common, hard-to-eliminate behavior called data races -- because no one knows how to provide better guarantees while retaining high performance. As a result, software is difficult to reason about and fails unexpectedly, leading to high development and testing costs, and imperiling reliability and security of mission- and safety-critical systems. This project provides stronger guarantees for software, achieving reasonable performance on contemporary systems. The intellectual merits are novel program analyses and runtime support that provide strong behavioral guarantees for programs. The project's broader significance and importance are making software systems automatically more reliable; eliminating whole classes of errors; reducing development and testing costs by simplifying programming; and simplifying and reducing costs of program analyses and software system support. Furthermore, the PIs' educational, mentoring, and outreach activities enhance the project by helping educate a diverse workforce of computer scientists trained in the project's work.
A key contribution is a novel hybrid static-dynamic analysis that enforces a memory model called statically bounded region serializability (SBRS) entirely in software. This memory model is strictly stronger than sequential consistency (SC) and has the potential to be more efficient than SC to enforce, since it allows compilers and hardware to reorder instructions within regions. The project involves designing, implementing, and evaluating (1) three compiler transformations for enforcing SBRS, (2) enhancements to the static-dynamic analysis for performance and flexibility, (3) a novel asynchronous protocol for overlapping concurrency control with program execution while enforcing SBRS, and (4) enhancements to a software transactional memory (STM) system to use the asynchronous protocol to improve scalability. The work provides, for the first time, support for always-on, end-to-end SBRS that is practical, and it makes further advancements in providing high-performance runtime support for atomicity.
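To make the memory model concrete, here is a minimal, hypothetical Java sketch (it is not the project's actual compiler transformations or runtime, and all names are invented for illustration) of one way lock-based instrumentation could enforce atomicity of statically bounded regions: the compiler conceptually ends a region at static boundaries (for example, synchronization operations, method calls, and loop back edges), and inserted runtime calls acquire a per-variable lock before each shared access and release all held locks only at the next region boundary (two-phase locking within a region).

    import java.util.ArrayDeque;
    import java.util.concurrent.locks.ReentrantLock;

    // Hypothetical runtime support for statically bounded region serializability.
    // All names here are illustrative; the project's actual system differs.
    class SharedVar<T> {
        final ReentrantLock lock = new ReentrantLock();  // per-variable lock
        T value;
        SharedVar(T initial) { value = initial; }
    }

    class RegionRuntime {
        // Locks acquired by the current thread's in-progress region.
        private static final ThreadLocal<ArrayDeque<ReentrantLock>> held =
            ThreadLocal.withInitial(ArrayDeque::new);

        // Instrumentation conceptually inserted before each shared read or write.
        static <T> SharedVar<T> access(SharedVar<T> v) {
            if (!v.lock.isHeldByCurrentThread()) {
                v.lock.lock();
                held.get().push(v.lock);
            }
            return v;
        }

        // Instrumentation conceptually inserted at every statically bounded region
        // boundary (e.g., synchronization operation, method call, or loop back edge).
        // Releasing all locks only here makes each region's accesses appear atomic.
        static void regionBoundary() {
            ArrayDeque<ReentrantLock> locks = held.get();
            while (!locks.isEmpty()) {
                locks.pop().unlock();
            }
        }
    }

    // Example: under SBRS the two assignments form one region and update x and y
    // atomically, even if another thread accesses them without synchronization.
    class Example {
        static final SharedVar<Integer> x = new SharedVar<>(0);
        static final SharedVar<Integer> y = new SharedVar<>(0);

        static void update() {
            RegionRuntime.access(x).value = 1;
            RegionRuntime.access(y).value = 1;
            RegionRuntime.regionBoundary();  // region ends (e.g., method exit)
        }
    }

A real enforcement mechanism must also resolve deadlocks among concurrently executing regions, for example by detecting lock-acquisition conflicts and restarting one of the regions; handling such cases efficiently is part of the performance and flexibility tradeoff that the project's compiler transformations and runtime support explore.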
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
As power and heat limitations prevent processor clock rates from increasing and processors necessarily add more cores to improve performance, software systems must become more parallel to achieve performance gains. However, writing parallel software that both performs well and is reliable is inherently challenging. The project developed new approaches for automatically improving the robustness of parallel software systems and for automatically finding bugs in parallel software.
The team designed, implemented, and evaluated compiler- and hardware-based techniques for automatically hardening software systems (i.e., eliminating bugs that lead to errors such as crashes) by enforcing atomicity of executing regions of code, thereby eliminating many erroneous behaviors. The techniques use a mix of software and hardware approaches that explore the tradeoffs between performance and flexibility. The most practical of the techniques leverages recently available processor support called hardware transactional memory to provide region atomicity efficiently, suggesting that it could be used in production systems.
The team also designed, implemented, and evaluated techniques that programmers can use during testing to find hard-to-detect errors called data races, which lead to crashes and other erroneous behaviors. These techniques extend predictive analysis to identify more data races than prior techniques could find, while ensuring that every reported data race is real. Their performance is competitive with widely used commercial data race detectors, which cannot "predict" data races and therefore report fewer races than the project's techniques.
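As a concrete illustration (a hypothetical example, not code from the project), the following Java program contains a data race that a conventional happens-before detector can miss in a given run but that a predictive analysis can report: if the observed schedule happens to execute t1's critical section before t2's, the lock induces an ordering that hides the race on the unsynchronized variable x, whereas a predictive analysis reasons that another feasible schedule of the same run exposes it.

    // Illustrative only: a data race that a happens-before detector can miss
    // in a particular observed run but that a predictive analysis can report.
    class PredictableRace {
        static int x = 0;                          // shared, accessed without synchronization
        static final Object lock = new Object();

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread(() -> {
                x = 1;                             // unsynchronized write to x
                synchronized (lock) { /* unrelated critical section */ }
            });
            Thread t2 = new Thread(() -> {
                synchronized (lock) { /* unrelated critical section */ }
                int r = x;                         // unsynchronized read: races with t1's write
            });
            t1.start();
            t2.start();
            t1.join();
            t2.join();
        }
    }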
These contributions demonstrate how to make software and hardware systems more reliable and less costly, which has the potential for positive impacts on all domains that rely on computing, including health, science, engineering, education, transportation, and business.
Last Modified: 11/29/2018
Modified by: Michael Bond