
NSF Org: | OAC Office of Advanced Cyberinfrastructure (OAC) |
Recipient: | The University of Texas at Austin |
Initial Amendment Date: | September 15, 2011 |
Latest Amendment Date: | March 6, 2017 |
Award Number: | 1134872 |
Award Instrument: | Cooperative Agreement |
Program Manager: | Robert Chadduck, rchadduc@nsf.gov, (703) 292-2247, OAC Office of Advanced Cyberinfrastructure (OAC), CSE Directorate for Computer and Information Science and Engineering |
Start Date: | September 1, 2011 |
End Date: | December 31, 2017 (Estimated) |
Total Intended Award Amount: | $27,500,000.00 |
Total Awarded Amount to Date: | $56,000,000.00 |
Funds Obligated to Date: | FY 2012 = $24,000,000.00; FY 2017 = $4,500,001.00 |
History of Investigator: | |
Recipient Sponsored Research Office: | 110 INNER CAMPUS DR, AUSTIN, TX, US 78712-1139, (512) 471-6424 |
Sponsor Congressional District: | |
Primary Place of Performance: | PO Box 7726, Austin, TX, US 78713-7726 |
Primary Place of Performance Congressional District: | |
Unique Entity Identifier (UEI): | |
Parent UEI: | |
NSF Program(s): | Innovative HPC, Leadership-Class Computing |
Primary Program Source: | 01001213DB NSF RESEARCH & RELATED ACTIVIT; 01001718DB NSF RESEARCH & RELATED ACTIVIT |
Program Reference Code(s): | |
Program Element Code(s): | |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
The current award is to The University of Texas at Austin to deploy and support Stampede, an HPC Linux cluster with an initial peak performance of 10 petaflops. The system will have 6,400 Dell Stallion servers connected by FDR InfiniBand. Each server will have dual processors based on Intel's forthcoming Sandy Bridge architecture and 32 GB of memory. The system will also include a pre-release version of Intel's forthcoming "Knights Corner" co-processors based on the Intel® Many Integrated Core (Intel® MIC) Architecture: highly parallel co-processors that utilize the x86 instruction set. Stampede will also offer 128 next-generation NVIDIA GPUs for remote visualization, 16 nodes with 1 TB of shared memory for large data analysis, and a high-performance file system with 14 petabytes of storage for data-intensive computing. All components will be integrated with an FDR InfiniBand network for extreme scalability. Altogether, Stampede will have a peak performance of over 10 petaflops and over 250 terabytes of memory. Second-generation co-processors based on the Intel® MIC Architecture will be added when they become available, increasing Stampede's peak performance to at least 15 petaflops.
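The memory figure above can be sanity-checked with simple arithmetic from the numbers in the abstract. This is a back-of-envelope sketch, not part of the award: the 8 GB per co-processor figure is an assumption (the abstract does not state co-processor memory size), chosen only to illustrate how the host and co-processor totals combine to exceed 250 terabytes.

```python
# Back-of-envelope check of Stampede's aggregate memory.
# Server count and per-server memory come from the abstract;
# per-co-processor memory (8 GB) is an ASSUMPTION for illustration.

SERVERS = 6_400        # Dell servers, per the abstract
HOST_MEM_GB = 32       # GB per server, per the abstract
COPROC_MEM_GB = 8      # assumed GB per Knights Corner co-processor

host_gb = SERVERS * HOST_MEM_GB                 # 204,800 GB of host memory
total_gb = host_gb + SERVERS * COPROC_MEM_GB    # 256,000 GB including co-processors

print(f"host memory: {host_gb / 1000:.1f} TB")
print(f"with co-processors: {total_gb / 1000:.1f} TB")
```

Host memory alone comes to about 205 TB, so under this assumption the co-processors plausibly account for the remainder of the "over 250 terabytes" total.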
This national resource will be available in early 2013 through the NSF cyberinfrastructure to enable basic research in science and engineering, and will be operated and supported for four years. The project will advance methods for petascale computing, including performance optimization for the Intel® MIC Architecture, and will develop new expertise in data-intensive computing. The award will enable 1,000+ computational and data-driven science and engineering projects to advance knowledge in their fields.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
This project deployed the Stampede supercomputer, which debuted as the sixth-fastest machine in the world in late 2012 and became the workhorse of the NSF XSEDE computing ecosystem from 2013 through 2017. In its initial configuration, Stampede delivered 9.6 PF of peak performance, and it was upgraded to more than 11 PF in 2015. Stampede was deployed at the Texas Advanced Computing Center at the University of Texas at Austin, in partnership with Dell Technologies, Intel, and Mellanox. Operations support was also provided by academic partners at Clemson University, Cornell University, Indiana University, The Ohio State University, the University of Colorado at Boulder, and the University of Texas at El Paso.
Over its lifetime, Stampede served tens of thousands of researchers. By the end of the project, Stampede had run more than 8.4 million simulation and data analysis jobs and had delivered more than 3.4 billion core hours of computation. Nearly 4,000 different projects made use of Stampede; almost 13,000 researchers directly ran jobs on the machine, and tens of thousands more used it indirectly through web-based "Science Gateway" interfaces. While Stampede delivered enormous capacity, demand was even higher: each quarter, requests for time on the machine from the research community exceeded capacity by a ratio of more than five to one. Users requesting renewal time on the machine cited over 9,500 papers as results of their computational research.
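For a sense of scale, the figures above imply a modest mean job size. This is illustrative arithmetic only, not a statistic from the report, and the average hides a very wide spread between small test jobs and full-machine runs:

```python
# Rough per-job average implied by the lifetime figures above.

JOBS = 8_400_000            # simulation and data analysis jobs (lifetime)
CORE_HOURS = 3_400_000_000  # total core hours delivered (lifetime)

avg = CORE_HOURS / JOBS     # mean core hours per job, roughly 405
print(f"average: {avg:.0f} core hours per job")
```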
Stampede was involved in the solution of many of the largest computational challenges of the last five years. Among the most notable was the role the project played in assisting the LIGO team in their Nobel Prize-winning observation of gravitational waves: Stampede performed some of the data analysis calculations for this discovery, and the Stampede team helped boost the performance of the LIGO software stack on both Stampede and other systems. Stampede also supported the scientific computations for the Gordon Bell Prize-winning mantle convection run in 2015, which led to groundbreaking results in understanding the connections between mantle convection and continental drift. Industry used Stampede to develop small satellite launch rockets and offshore oil platforms, and the machine was a key resource in natural hazard response, used extensively in hurricane, tornado, and earthquake forecasting. Stampede enabled numerous other science, engineering, and health applications, from analyzing data from the Large Hadron Collider at CERN to simulating blood flow in capillaries, modeling galaxy formation in the early universe, and developing new nanomaterials.
Last Modified: 05/03/2018
Modified by: Daniel Stanzione