Award Abstract # 1909877
CNS Core: Small: Collaborative: Salvaging Commodity Operating Systems to Support Emerging Networking Technologies

NSF Org: CNS
Division Of Computer and Network Systems
Recipient: THE RESEARCH FOUNDATION FOR THE STATE UNIVERSITY OF NEW YORK
Initial Amendment Date: July 21, 2019
Latest Amendment Date: July 21, 2019
Award Number: 1909877
Award Instrument: Standard Grant
Program Manager: Jason Hallstrom
CNS
 Division Of Computer and Network Systems
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: October 1, 2019
End Date: September 30, 2023 (Estimated)
Total Intended Award Amount: $250,000.00
Total Awarded Amount to Date: $250,000.00
Funds Obligated to Date: FY 2019 = $250,000.00
History of Investigator:
  • Hui Lu (Principal Investigator)
    hui.lu@uta.edu
Recipient Sponsored Research Office: SUNY at Binghamton
4400 VESTAL PKWY E
BINGHAMTON
NY  US  13902
(607)777-6136
Sponsor Congressional District: 19
Primary Place of Performance: SUNY at Binghamton
4400 Vestal Pkwy E
Binghamton
NY  US  13902-6000
Primary Place of Performance Congressional District: 19
Unique Entity Identifier (UEI): NQMVAAQUFU53
Parent UEI: L9ZDVULCHCV3
NSF Program(s): CSR-Computer Systems Research
Primary Program Source: 01001920DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 7923
Program Element Code(s): 735400
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

The networking landscape has changed dramatically with two main advances: (1) fast hardware has enabled high-speed, high-bandwidth computer networks; and (2) new networking architectures, such as software-defined networking, have given rise to flexible ways of operating networking services. Unfortunately, traditional systems software, such as commodity operating systems, faces critical challenges in efficiently supporting such high-speed networks and new networking architectures. This project will conduct a holistic study of the network software stacks in commodity operating systems to identify critical bottlenecks, propose new solutions to address these bottlenecks, and validate the proposed solutions using real prototype implementations.

Specifically, the project entails three research thrusts. First, to maximize packet-level parallelism, it will develop a stress-testing approach to locate serialization bottlenecks and design a highly efficient pipelining process to parallelize packet processing in virtualized networks. Second, to improve per-packet processing efficiency for small packets, it will develop a multi-level packet coalescing approach, including hardware interrupt coalescing, software interrupt coalescing, and lossless packet coalescing. Third, to strike a good balance between parallelism and data locality, it will design a holistic scheduling algorithm to optimally multiplex in-kernel interrupts and user-level threads for virtualized network functions.
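The packet-coalescing idea in the second thrust can be sketched in a few lines. The following is a minimal illustration only, not the project's implementation: the `PacketCoalescer` class, its thresholds, and the `deliver` callback are all hypothetical names. It buffers small packets and releases them as one batch once either a count threshold or a timeout is reached, amortizing per-packet overheads (such as interrupt handling) over the batch.

```python
import time
from collections import deque

class PacketCoalescer:
    """Toy software-level packet coalescer (hypothetical API).

    Small packets are buffered and delivered as one batch once either
    a count threshold or a timeout is reached, so per-packet costs are
    amortized over the whole batch.
    """

    def __init__(self, max_batch=32, timeout_s=0.001, deliver=None):
        self.max_batch = max_batch
        self.timeout_s = timeout_s
        self.deliver = deliver or (lambda batch: None)
        self.buf = deque()
        self.first_arrival = None

    def on_packet(self, pkt, now=None):
        now = time.monotonic() if now is None else now
        if not self.buf:
            self.first_arrival = now
        self.buf.append(pkt)
        # Flush when the batch is full or the oldest buffered packet has
        # waited longer than the coalescing timeout (bounds added latency).
        if (len(self.buf) >= self.max_batch
                or now - self.first_arrival >= self.timeout_s):
            self.flush()

    def flush(self):
        if self.buf:
            batch, self.buf = list(self.buf), deque()
            self.deliver(batch)
```

The timeout bound is what keeps coalescing "lossless" from a latency standpoint: a lone small packet is never held indefinitely waiting for a full batch.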

The knowledge developed in this project will help improve key aspects of network performance in commodity operating systems, benefiting all systems and applications that run on them. The research outcomes will influence the design and implementation of production networking systems and will be integrated into core computer science courses. The project will also provide research training to undergraduate and graduate students, including students from underrepresented groups.

The project will generate four main types of data: prototype implementations, software instrumentation benchmarks, detailed reports of empirical evaluations, and curriculum materials. These data will be maintained on the project website during the project and for at least three years after its end date: http://www.cs.binghamton.edu/~huilu/projects/CNets.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Lei, Jiaxin; Munikar, Manish; Lu, Hui; and Rao, Jia. "Accelerating Packet Processing in Container Overlay Networks via Packet-level Parallelism." Proceedings of the IEEE International Parallel and Distributed Processing Symposium (IPDPS), 2023. https://doi.org/10.1109/IPDPS54959.2023.00018

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

High-speed networking technologies have been essential in fulfilling the need for agile and programmable network management, providing low-cost, scalable, and flexible networking solutions. Unfortunately, traditional systems software, such as commodity operating systems, faces critical challenges in efficiently supporting such high-speed networks and new networking architectures. This project sought to identify critical bottlenecks of overlay networks, propose new systems solutions to address these bottlenecks, and validate the proposed solutions using real prototype implementations. The project has made significant contributions to enhancing the kernel network stack to better support network virtualization.

- It conducted comprehensive empirical studies of container overlay networks and identified the critical parallelization bottlenecks within the kernel network stack that hinder the scalability of container overlay networks.

- It explored device-level parallelism with a fast and balanced packet processing approach. The approach pipelines the software interrupts associated with different network devices on separate cores, preventing excessive software-interrupt processing from overloading a single core.

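As a rough illustration of the balancing idea in the bullet above (not the project's actual mechanism, which operates inside the kernel's software-interrupt machinery), the following sketch assigns each virtual device's interrupt-processing load to the currently least-loaded core. All names and weights here (`assign_softirq_cores`, the per-device loads) are hypothetical.

```python
def assign_softirq_cores(devices, cores):
    """Hypothetical sketch: spread per-device software-interrupt work
    across cores so that no single core absorbs every device's load.

    devices: mapping of device name -> estimated interrupt load.
    cores:   list of core IDs available for interrupt processing.
    Returns a device -> core placement using greedy least-loaded
    assignment (heaviest devices placed first).
    """
    load = {c: 0 for c in cores}
    placement = {}
    for dev, weight in sorted(devices.items(), key=lambda kv: -kv[1]):
        core = min(cores, key=lambda c: load[c])  # least-loaded core
        placement[dev] = core
        load[core] += weight
    return placement
```

In a real kernel this steering is done with per-CPU interrupt queues and affinity settings rather than a user-space placement function; the sketch only conveys the load-balancing intuition.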
- It explored packet-level parallelism with a highly efficient packet steering approach. The approach splits packets from the same flow into multiple micro-flows that can be processed in parallel on multiple cores, while preserving in-order packet delivery with little overhead.

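The split-and-reorder idea behind the packet-steering result can be shown with a toy sketch (hypothetical names; the real mechanism steers packets inside the kernel): packets of one flow are tagged with sequence numbers, fanned out round-robin to parallel workers, and a reorder buffer releases results strictly in sequence even when workers complete out of order.

```python
def split_into_microflows(packets, n_workers):
    """Fan a single flow out round-robin into n_workers micro-flows,
    tagging each packet with its original sequence number."""
    flows = [[] for _ in range(n_workers)]
    for seq, pkt in enumerate(packets):
        flows[seq % n_workers].append((seq, pkt))
    return flows

class ReorderBuffer:
    """Release packets strictly in sequence-number order, even when
    micro-flows finish processing them out of order."""

    def __init__(self):
        self.next_seq = 0
        self.pending = {}
        self.delivered = []

    def complete(self, seq, pkt):
        self.pending[seq] = pkt
        # Drain every consecutively numbered packet that is now ready.
        while self.next_seq in self.pending:
            self.delivered.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
```

The reorder buffer is the piece that makes the parallelism safe: downstream consumers still observe the flow's original packet order.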
- It reduced the complexity and redundancy of the packet processing path with a flow caching approach. The approach caches and reuses a flow's forwarding decision, allowing subsequent packets of the flow to bypass most of the network stack along a shortened, faster path.

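The flow-caching idea can be sketched as a small lookup table keyed by the flow's 5-tuple. This is an illustrative toy only; the field names, `FlowCache`, and the `slow_path_lookup` callback are hypothetical stand-ins for the full in-kernel lookup path.

```python
def flow_key(pkt):
    # 5-tuple identifying a flow (field names are illustrative).
    return (pkt["src_ip"], pkt["dst_ip"],
            pkt["src_port"], pkt["dst_port"], pkt["proto"])

class FlowCache:
    """Cache a flow's forwarding decision: the first packet takes the
    full (slow) lookup path, and later packets of the same flow reuse
    the cached decision along a shortened path."""

    def __init__(self, slow_path_lookup):
        self.slow_path_lookup = slow_path_lookup
        self.cache = {}
        self.hits = self.misses = 0

    def forward(self, pkt):
        key = flow_key(pkt)
        if key in self.cache:
            self.hits += 1          # fast path: reuse cached decision
        else:
            self.misses += 1        # slow path: full stack traversal
            self.cache[key] = self.slow_path_lookup(pkt)
        return self.cache[key]
```

Since packets within a flow share the same forwarding decision, the expensive lookup runs once per flow rather than once per packet.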
- It discovered new hardware/software co-design opportunities to streamline the processing of overlay network packets by enabling fine-grained offloading of network processing between programmable SmartNICs and the host kernel.

The intellectual merit outcomes of this project have been to discover previously unknown issues that affect the effectiveness and efficiency of overlay networks and to develop novel, practical solutions at the system level, greatly improving the performance of packet processing in container overlay networks and better supporting network virtualization.

This project has had broader impacts in several ways. The research results have been disseminated through conference proceedings, presentations, and three open-source projects. Work from this project appears in top systems conferences, including EuroSys, IPDPS, and ICDCS. The project has supported several Ph.D. and undergraduate students, providing them with valuable research training. The research results have also led to the development of a new course in cloud computing that has attracted 300+ students over eight sessions.


Last Modified: 12/22/2023
Modified by: Hui Lu
