Award Abstract # 1541380
CC*DNI Networking Infrastructure: A Software Defined Networking-Enabled Research Infrastructure

NSF Org: OAC
Office of Advanced Cyberinfrastructure (OAC)
Recipient: UNIVERSITY OF KENTUCKY RESEARCH FOUNDATION, THE
Initial Amendment Date: August 25, 2015
Latest Amendment Date: May 9, 2018
Award Number: 1541380
Award Instrument: Standard Grant
Program Manager: Kevin Thompson
kthompso@nsf.gov
 (703)292-4220
OAC - Office of Advanced Cyberinfrastructure (OAC)
CSE - Directorate for Computer and Information Science and Engineering
Start Date: October 1, 2015
End Date: September 30, 2018 (Estimated)
Total Intended Award Amount: $486,572.00
Total Awarded Amount to Date: $486,572.00
Funds Obligated to Date: FY 2015 = $486,572.00
History of Investigator:
  • Brian Nichols (Principal Investigator)
    bnichols@uky.edu
  • James Griffioen (Co-Principal Investigator)
  • Vincent Kellen (Former Principal Investigator)
  • Vernon Bumgardner (Former Principal Investigator)
Recipient Sponsored Research Office: University of Kentucky Research Foundation
500 S LIMESTONE
LEXINGTON
KY  US  40526-0001
(859)257-9420
Sponsor Congressional District: 06
Primary Place of Performance: University of Kentucky Research Foundation
500 S Limestone 109 Kinkead Hall
Lexington
KY  US  40526-0001
Primary Place of Performance Congressional District: 06
Unique Entity Identifier (UEI): H1HYA8Z1NTM5
Parent UEI:
NSF Program(s): Campus Cyberinfrastructure
Primary Program Source: 01001516DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 9150
Program Element Code(s): 808000
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

For a growing number of data-intensive research projects spanning a wide range of disciplines, high-speed network access to computation and storage -- located either locally on campus or in the cloud (e.g., at national labs) -- has become critical to the research. While aging, slow campus network infrastructure is a key contributor to poor performance, an equally important contributor is the bottlenecks that arise at security and network management policy enforcement points in the network.

This project aims to dramatically improve network performance for a wide range of researchers across the campus by removing many of the bottlenecks inherent in the traditional network infrastructure. By replacing the existing network with modern software defined network (SDN) infrastructure, researchers will benefit both from the increased speed of the underlying network hardware and from the ability to utilize SDN paths that avoid traditional policy enforcement bottlenecks. As a result, researchers across the campus will see significantly faster data transfers, enabling them to carry out their research more effectively.

This project builds on and extends a successful initial SDN deployment at the University of Kentucky, adding SDN switches to ten research buildings and connecting each of them to the existing SDN research network core with 40 Gbps uplinks. To ensure high-speed access to the Internet and the cloud, the research core network is being linked to Internet2 through a new 100 Gbps connection. Research traffic in the enabled buildings is automatically routed onto the SDN research network, where SDN flow rules allow research traffic to bypass legacy network infrastructure designed to police normal traffic. By extending SDN capabilities to several new buildings on campus, a wide range of researchers are able to achieve significantly higher network throughput for their research data.
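
To make the bypass idea concrete, the following is a minimal, hypothetical sketch (not the project's actual controller code) of how such a rule could be installed from a Ryu-based OpenFlow 1.3 controller; the research subnet, uplink port number, and priority shown here are illustrative assumptions.

    # Hypothetical sketch only: install a high-priority rule that steers IPv4
    # traffic sourced from an assumed research subnet out an assumed 40G
    # research-core uplink port, bypassing the normal policy-enforced path.
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    RESEARCH_SUBNET = ("10.163.0.0", "255.255.0.0")  # assumed research subnet
    RESEARCH_UPLINK_PORT = 48                        # assumed 40G uplink port

    class ResearchBypass(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_features_handler(self, ev):
            dp = ev.msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser

            # Match IPv4 packets whose source lies in the research subnet.
            match = parser.OFPMatch(eth_type=0x0800, ipv4_src=RESEARCH_SUBNET)
            actions = [parser.OFPActionOutput(RESEARCH_UPLINK_PORT)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]

            # Higher priority than the default rules, so matching research
            # flows take the bypass path while all other traffic continues
            # through the existing (policed) pipeline.
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                          match=match, instructions=inst))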

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Hayashida, Mami and Rivera, Sergio and Griffioen, James and Fei, Zongming and Song, Yongwook, "Debugging SDN in HPC Environments," PEARC '18: Proceedings of the Practice and Experience on Advanced Research Computing, 2018. https://doi.org/10.1145/3219104.3229277
Rivera, Sergio and Fei, Zongming and Griffioen, James, "POLANCO: Enforcing Natural Language Network Policies," 2020 29th International Conference on Computer Communications and Networks (ICCCN), 2020. https://doi.org/10.1109/ICCCN49398.2020.9209748
Rivera, Sergio and Griffioen, James and Fei, Zongming and Hayashida, Mami and Shi, Pinyi and Chitre, Bhushan and Chappell, Jacob and Song, Yongwook and Pike, Lowell and Carpenter, Charles and Nasir, Hussamuddin, "Navigating the Unexpected Realities of Big Data Transfers in a Cloud-based World," PEARC '18: Proceedings of the Practice and Experience on Advanced Research Computing, 2018. https://doi.org/10.1145/3219104.3229276
Rivera, Sergio and Hayashida, Mami and Griffioen, James and Fei, Zongming, "Dynamically Creating Custom SDN High-Speed Network Paths for Big Data Science Flows," PEARC '17: Practice & Experience in Advanced Research Computing, 2017. https://doi.org/10.1145/3093338.3104155

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.


This project addressed the growing challenge of supporting data-intensive scientific research on university campuses.  To improve a researcher's workflow, one can certainly upgrade the campus network to a higher speed, thereby reducing the time needed to transfer the researcher's science data.  However, even with a higher-speed campus network, research traffic must still compete with general campus traffic for limited bandwidth.  Moreover, conventional campus networks apply the same policies to all forms of network traffic, forcing research traffic to pay the same overheads and pass through the same choke points (e.g., middlebox firewalls, traffic shapers, rate limiters, and deep packet inspection boxes) as non-research traffic.  As a result, research traffic rarely achieves the potential improvements that an upgraded campus network infrastructure should bring.

The typical way to address this problem is to build a special-purpose, high-speed Science DMZ network outside the University firewalls and then move the University's High Performance Computing (HPC) systems (e.g., the University supercomputer) from the University's production network (a.k.a. the campus network) to the new Science DMZ network.  In contrast to the Science DMZ approach, the goal of this project was to transform the University's production (campus) network into a Software Defined Network (SDN) that could be programmed to carry research traffic at high speed.  In particular, we wanted to use new and emerging SDN capabilities to route research traffic around firewalls and other performance-limiting middleboxes.  Because we transformed the production network itself, the new SDN can deliver high-speed connectivity not only to the University's HPC systems, but also to researchers' desktops and server machines (e.g., file servers and compute servers) all across the campus network.

Consequently, this project deployed SDN-enabled network equipment across a significant portion of the University of Kentucky campus with the goal of bringing high-speed data transfer capabilities to a wide range of researchers.  In particular, we replaced the existing (production) campus network infrastructure with SDN-enabled network infrastructure in several buildings on campus.  We then used network controller software that is able to differentiate between research and non-research traffic, allowing research traffic from any connected system to be routed over high-bandwidth links that bypass middlebox choke points.  General non-research traffic, on the other hand, continues to traverse the existing network infrastructure, including policy-enforcing middleboxes.  As a result, a researcher's machine can transmit research traffic at high speed (e.g., gigabits per second) while at the same time transmitting non-research traffic at normal speed (e.g., tens to hundreds of megabits per second), with each type of traffic taking an appropriate route across the campus SDN-enabled network infrastructure.  In addition, researchers' machines remain behind, and protected by, the campus security mechanisms by default.  Also, as in other software defined networks, network updates and new types of service can be deployed on much shorter timescales.
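
As a rough illustration of the per-flow differentiation described above (an assumed sketch, not the University's controller logic; the endpoint registry and path names are invented), a controller might consult a registry of approved research endpoints and choose a path per flow, so that the same desktop can carry both research and general traffic at the same time.

    # Assumed sketch: classify individual flows as research or general traffic
    # based on a registry of approved research endpoints, so one machine can
    # simultaneously send research traffic over the bypass path and ordinary
    # traffic over the conventional campus path.
    import ipaddress

    # Invented example registry of research endpoints (DTNs, lab servers, etc.).
    RESEARCH_ENDPOINTS = [
        ipaddress.ip_network("10.163.12.0/24"),
        ipaddress.ip_network("192.5.0.0/16"),
    ]

    BYPASS_PATH = "sdn-research-core"    # 40 Gbps uplinks, skips middleboxes
    DEFAULT_PATH = "campus-production"   # firewall / shaper / DPI chain

    def is_research_endpoint(ip: str) -> bool:
        addr = ipaddress.ip_address(ip)
        return any(addr in net for net in RESEARCH_ENDPOINTS)

    def select_path(src_ip: str, dst_ip: str) -> str:
        """Pick a campus path for a new flow based on its endpoints."""
        if is_research_endpoint(src_ip) and is_research_endpoint(dst_ip):
            return BYPASS_PATH
        return DEFAULT_PATH

    # Example: the same source host gets different paths for different flows.
    print(select_path("10.163.12.25", "192.5.7.9"))      # -> sdn-research-core
    print(select_path("10.163.12.25", "93.184.216.34"))  # -> campus-production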

Our SDN-enabled campus network infrastructure now covers more than a dozen key research buildings on campus and supports researchers from a wide range of areas, including Chemistry, Physics, Biology, Plant and Soil Sciences, Mechanical Engineering, Computer Science, Mining, Statistics, Bio(medical) Informatics, Business and Economics, Communications, and Special Collections.  We compared the performance of the new SDN network infrastructure to the previous (non-SDN) campus network infrastructure in several of these buildings and found that, when transferring data to external Internet sites, we regularly saw speeds in the multiple gigabit per second range over SDN, while the previous conventional campus network infrastructure typically operated in the tens or hundreds of megabits per second range.  In other words, we were able to improve researcher file transfer speeds by one to two orders of magnitude in most cases.  We also worked with researchers to find the best software tools to quickly move data between their desktops or file servers and the cloud, and showed that parallel transfer tools could typically be tuned to offer the best performance.
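
Throughput comparisons of this kind can be scripted with common tools; the sketch below (hypothetical host names, not the project's actual test harness; assumes the iperf3 command-line tool is installed on both ends) runs a parallel-stream iperf3 test and reports the achieved rate in Gbit/s.

    # Hypothetical measurement sketch: run an iperf3 test with several parallel
    # streams against a server and report the received throughput.
    import json
    import subprocess

    def measure_gbps(server: str, streams: int = 8, seconds: int = 10) -> float:
        """Run iperf3 against `server` and return throughput in Gbit/s."""
        result = subprocess.run(
            ["iperf3", "-c", server, "-P", str(streams), "-t", str(seconds), "-J"],
            capture_output=True, text=True, check=True,
        )
        report = json.loads(result.stdout)
        return report["end"]["sum_received"]["bits_per_second"] / 1e9

    if __name__ == "__main__":
        # Placeholder endpoints for the bypass path vs. the conventional path.
        for label, host in [("SDN bypass path", "dtn.example.org"),
                            ("conventional path", "legacy.example.org")]:
            print(f"{label}: {measure_gbps(host):.2f} Gbit/s")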


Last Modified: 12/28/2018
Modified by: Brian Nichols

Please report errors in award information by writing to: awardsearch@nsf.gov.
