Award Abstract # 2119069
Collaborative Research: PPoSS: LARGE: Unifying Software and Hardware to Achieve Performant and Scalable Frictionless Parallelism in the Heterogeneous Future

NSF Org: CCF
Division of Computing and Communication Foundations
Recipient: NORTHWESTERN UNIVERSITY
Initial Amendment Date: July 30, 2021
Latest Amendment Date: August 5, 2024
Award Number: 2119069
Award Instrument: Continuing Grant
Program Manager: Anindya Banerjee
abanerje@nsf.gov
(703) 292-7885
CCF - Division of Computing and Communication Foundations
CSE - Directorate for Computer and Information Science and Engineering
Start Date: October 1, 2021
End Date: September 30, 2026 (Estimated)
Total Intended Award Amount: $1,927,454.00
Total Awarded Amount to Date: $1,959,454.00
Funds Obligated to Date: FY 2021 = $1,466,740.00
FY 2022 = $16,000.00
FY 2023 = $16,000.00
FY 2024 = $460,714.00
History of Investigator:
  • Peter Dinda (Principal Investigator)
    pdinda@northwestern.edu
  • Nikos Hardavellas (Co-Principal Investigator)
  • Simone Campanoni (Co-Principal Investigator)
Recipient Sponsored Research Office: Northwestern University
633 CLARK ST
EVANSTON
IL  US  60208-0001
(312)503-7955
Sponsor Congressional District: 09
Primary Place of Performance: Northwestern University
2233 Tech Drive
Evanston
IL  US  60208-0001
Primary Place of Performance Congressional District: 09
Unique Entity Identifier (UEI): EXZVPWZBLUE8
Parent UEI:
NSF Program(s): PPoSS-PP of Scalable Systems
Primary Program Source: 01002324DB NSF RESEARCH & RELATED ACTIVIT
01002122DB NSF RESEARCH & RELATED ACTIVIT
01002425DB NSF RESEARCH & RELATED ACTIVIT
01002223DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 026Z, 9251
Program Element Code(s): 042Y00
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Exploiting parallelism is essential to making full use of computer systems, from phones to supercomputers. It is thus intrinsic to most applications today, and is becoming increasingly so with time, especially as hardware becomes more heterogeneous. Programming effective and performant parallel applications remains a serious challenge, however. Achieving both high productivity and high performance currently requires multiple experts; the project seeks to bring that capability within reach of an ordinary programmer. This problem is often approached along only one of two lines: "theory down", focusing on high-level parallel languages and the theory and practice of parallel algorithms, or "architecture up", focusing on rethinking abstractions at multiple layers, starting with the hardware. The project's core novelties are (1) to unify these two approaches, combining their strengths to reduce the expertise needed to write performant parallel programs, and (2) to develop integrated techniques that make it possible to take advantage of heterogeneous hardware. Realizing these novelties will require designing a "full-stack" approach to parallelism and innovating across the hardware/software stack. The project's impacts are (1) the development of techniques that dramatically simplify parallel programming, including for heterogeneous machines, putting it within the purview of the ordinary programmer, and (2) the development of systems and educational materials to teach this skill to broader audiences, including students at the researchers' institutions.

The technical strategy of the project is to bridge high-level parallel languages, which allow clean expression and analysis of program parallelism, to heterogeneous, extensible hardware (modeled using FPGAs) through an integrated series of intermediate representations (IRs) of the program and of the hardware/software capabilities of the target platform. These representations are designed to avoid the information loss (going both up and down the compiler/runtime/OS/hardware stack) that currently hampers optimization at all levels. A new compilation model for high-level parallel languages is being developed that extensively leverages modern compiler technology while avoiding "premature lowering" of parallel constructs and "premature abstraction" of hardware and low-level software features. Benchmarks are being developed to measure the effectiveness of the approach.
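
To make "premature lowering" concrete, the sketch below (illustrative only, not code from the project; the parallel_for helper and its parameters are hypothetical) contrasts a high-level parallel loop, whose range and iteration independence are explicit and analyzable, with the explicit thread-spawning form that conventional compilation commits to early, after which later passes can no longer see the original parallel structure.

// Illustrative sketch (hypothetical API, not the project's): a high-level
// parallel_for whose iterations are explicitly independent, and the kind of
// explicit thread-spawning code it is typically lowered to early.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// High-level form: the loop's range and iteration independence are explicit,
// so a compiler that keeps this construct in its IR can still reason about
// granularity, fusion, or mapping to heterogeneous hardware.
template <typename F>
void parallel_for(std::size_t n, F body,
                  unsigned workers = std::thread::hardware_concurrency()) {
    if (workers == 0) workers = 1;
    std::size_t chunk = (n + workers - 1) / workers;
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t lo = w * chunk, hi = std::min(n, lo + chunk);
        if (lo >= hi) break;
        // Lowered form: once the loop becomes explicit thread spawns, the
        // original parallel-loop intent is no longer visible to later passes,
        // an example of the information loss the abstract describes.
        pool.emplace_back([=] { for (std::size_t i = lo; i < hi; ++i) body(i); });
    }
    for (auto& t : pool) t.join();
}

int main() {
    std::vector<double> a(1 << 20, 1.0), b(a.size(), 2.0), c(a.size());
    // Programmer-facing view: every iteration is independent; order is irrelevant.
    parallel_for(a.size(), [&](std::size_t i) { c[i] = a[i] + b[i]; });
    std::cout << std::accumulate(c.begin(), c.end(), 0.0) << "\n";
    return 0;
}

The project's IRs, as described above, aim to carry the high-level form of such constructs further down the stack before committing to a particular lowering.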

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

(Showing: 1 - 10 of 18)
Arora, Jatin and Muller, Stefan K and Acar, Umut A "Disentanglement with Futures, State, and Interaction" Proceedings of the ACM on Programming Languages, v.8, 2024 https://doi.org/10.1145/3632895
Deiana, Enrico Armenio and Suchy, Brian and Wilkins, Michael and Homerding, Brian and McMichen, Tommy and Dunajewski, Katarzyna and Dinda, Peter and Hardavellas, Nikos and Campanoni, Simone "Program State Element Characterization" International Symposium on Code Generation and Optimization, 2023 https://doi.org/10.1145/3579990.3580011
Filipiuk, Thomas and Wanninger, Nick and Dhiantravan, Nadharm and Surmeier, Carson and Bernat, Alex and Dinda, Peter "CARAT KOP: Towards Protecting the Core HPC Kernel from Linux Kernel Modules" Proceedings of the 13th International Workshop on Runtime and Operating Systems for Supercomputers (ROSS 2023), 2023 https://doi.org/10.1145/3624062.3624237
Kandiah, Vijay and Lustig, Daniel and Villa, Oreste and Nellans, David and Hardavellas, Nikos "Parsimony: Enabling SIMD/Vector Programming in Standard Compiler Flows" Proceedings of the 21st ACM/IEEE International Symposium on Code Generation and Optimization, 2023 https://doi.org/10.1145/3579990.3580019
Lin, Zhenpeng and Yu, Zheng and Guo, Ziyi and Campanoni, Simone and Dinda, Pete and Xing, Xinyu "CAMP: Compiler and Allocator-based Heap Memory Protection", 2024
Manohar, Magdalen Dobson and Shen, Zheqi and Blelloch, Guy and Dhulipala, Laxman and Gu, Yan and Simhadri, Harsha Vardhan and Sun, Yihan "ParlayANN: Scalable and Deterministic Parallel Graph-Based Approximate Nearest Neighbor Search Algorithms", 2024 https://doi.org/10.1145/3627535.3638475
Matni, Angelo and Deiana, Enrico Armenio and Su, Yian and Gross, Lukas and Ghosh, Souradip and Apostolakis, Sotiris and Xu, Ziyang and Tan, Zujun and Chaturvedi, Ishita and Homerding, Brian and McMichen, Tommy and August, David I. and Campanoni, Simone "NOELLE Offers Empowering LLVM Extensions" 2022 IEEE/ACM International Symposium on Code Generation and Optimization (CGO), 2022 https://doi.org/10.1109/CGO53902.2022.9741276
McMichen, Tommy and Greiner, Nathan and Zhong, Peter and Sossai, Federico and Patel, Atmn and Campanoni, Simone "Representing Data Collections in an SSA Form", 2024 https://doi.org/10.1109/CGO57630.2024.10444817
Muller, Stefan K and Singer, Kyle and Keeney, Devyn Terra and Neth, Andrew and Agrawal, Kunal and Lee, I-Ting Angelina and Acar, Umut A "Responsive Parallelism with Synchronization" Proceedings of the ACM on Programming Languages, v.7, 2023 https://doi.org/10.1145/3591249
Su, Yian and Rainey, Mike and Wanninger, Nick and Dhiantravan, Nadharm and Liang, Jasper and Acar, Umut A and Dinda, Peter and Campanoni, Simone "Compiling Loop-Based Nested Parallelism for Irregular Workloads" ASPLOS '24: Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, 2024 https://doi.org/10.1145/3620665.3640405
Tauro, Brian R and Suchy, Brian and Campanoni, Simone and Dinda, Peter and Hale, Kyle C "TrackFM: Far-out Compiler Support for a Far Memory World", 2024 https://doi.org/10.1145/3617232.3624856