
NSF Org: | OAC Office of Advanced Cyberinfrastructure (OAC) |
Recipient: | University of Texas at Austin |
Initial Amendment Date: | September 27, 2006 |
Latest Amendment Date: | September 13, 2011 |
Award Number: | 0622780 |
Award Instrument: | Cooperative Agreement |
Program Manager: | Robert Chadduck, rchadduc@nsf.gov, (703) 292-2247, OAC Office of Advanced Cyberinfrastructure (OAC), CSE Directorate for Computer and Information Science and Engineering |
Start Date: | October 1, 2006 |
End Date: | September 30, 2013 (Estimated) |
Total Intended Award Amount: | $58,930,228.00 |
Total Awarded Amount to Date: | $64,733,304.00 |
Funds Obligated to Date: |
FY 2007 = $6,898,757.00
FY 2008 = $6,246,272.00
FY 2009 = $15,088,275.00
FY 2011 = $6,500,000.00 |
History of Investigator: |
|
Recipient Sponsored Research Office: | 110 INNER CAMPUS DR, AUSTIN, TX, US 78712-1139, (512) 471-6424 |
Sponsor Congressional District: |
|
Primary Place of Performance: | 110 INNER CAMPUS DR, AUSTIN, TX, US 78712-1139 |
Primary Place of Performance Congressional District: |
|
Unique Entity Identifier (UEI): |
|
Parent UEI: |
|
NSF Program(s): | Innovative HPC, Leadership-Class Computing |
Primary Program Source: |
0100999999 NSF RESEARCH & RELATED ACTIVIT app-0107
01000809DB NSF RESEARCH & RELATED ACTIVIT
01000910DB NSF RESEARCH & RELATED ACTIVIT
0100999999 NSF RESEARCH & RELATED ACTIVIT
01001011DB NSF RESEARCH & RELATED ACTIVIT |
Program Reference Code(s): |
|
Program Element Code(s): |
|
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
Proposal: 0622780 PI Name: Boisseau, John R.
This award is for the acquisition, deployment and operation of a high-performance computational system for use by the broad science and engineering research and education community. The system, to be known as the Sun Constellation Cluster, will be deployed at the Texas Advanced Computing Center, located at the University of Texas at Austin. The project represents a collaboration between the University of Texas at Austin, Sun Microsystems, Advanced Micro Devices, the Cornell Theory Center at Cornell University, and the Fulton High Performance Computing Institute at Arizona State University.
The Sun Constellation Cluster will greatly increase the combined capacity of the computational resources of the current NSF-funded, shared-use, high-performance computing facilities and provide a capability that is an order of magnitude larger than the largest supercomputer that NSF currently supports. Because of this, it will advance research and education across a broad range of topical areas in science and engineering that use high-performance computing to advance understanding. With this new resource, researchers will study the properties of minerals at the extreme temperatures and pressures that occur deep within the Earth. They will use it to simulate the development of structure in the early Universe. They will probe the structure of novel phases of matter such as the quark-gluon plasma. Such computing capabilities enable the modeling of life cycles that capture interdependencies across diverse disciplines and multiple scales to create globally competitive manufacturing enterprise systems. The system will permit researchers to examine the way proteins fold and vibrate after they are synthesized inside an organism. Sophisticated numerical simulations will permit scientists and engineers to perform a wide range of in silico experiments that would otherwise be too difficult, too expensive or impossible to perform in the laboratory.
High-performance computing of the sort that will be possible with the new system is also essential to the success of research and education conducted with sophisticated experimental tools. For example, without the waveforms produced by numerical simulations of black hole collisions and other astrophysical events, gravitational wave signals cannot be extracted from the data produced by the Laser Interferometer Gravitational Wave Observatory; high-resolution seismic inversions from the higher density of broadband seismic observations furnished by the EarthScope project are necessary to determine shallow and deep Earth structure; simultaneous integrated computational and experimental testing is conducted on the Network for Earthquake Engineering Simulation to improve seismic design of buildings and bridges; and advanced computing capabilities will be essential to extracting the signature of the Higgs boson and supersymmetric particles, two of the scientific drivers of the Large Hadron Collider, from the petabytes of data produced in the trillions of particle collisions.
This project presents an exciting opportunity to advance the type of research described above by: (i) greatly extending the capacity of high-performance computational resources available to the science and engineering communities, and (ii) extending the range of advanced computations that can be handled by providing a system with a very large amount of memory and a very large amount of processing capability. This system will use an architecture that is similar to that present in many academic institutions and to which many science and engineering applications have already been ported. In addition, the system represents an important stepping-stone toward the goal of petascale computing in science and engineering research and education by the end of the decade. It will provide a platform that will allow researchers to experiment with techniques for overcoming one of the hurdles on the path to petascale computing: scaling to very large numbers of processors. This computing system will also provide opportunities for many graduate students and post-docs to gain experience in using high-performance computing systems.
The Texas Advanced Computing Center and its partners will broaden the impact of the computing resource by: teaching in situ and online classes for undergraduate and graduate students in high-performance computing, visualization, data analysis, and grid computing for computational research in science and engineering; partnering with faculty and students at a number of Minority Serving Institutions to provide training in the use of high-performance computing resources; and collaborating with the Girlstart program, a program that supports and enhances the interest of girls in math, science, and technology.
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
This project funded the groundbreaking Ranger supercomputer. In February 2008, Ranger debuted as the most powerful system for open science in the world, with 62,976 processor cores and a peak performance of 579 trillion floating-point operations per second. Ranger, the winner of the original NSF Track2 competition, deployed at the Texas Advanced Computing Center (TACC) at the University of Texas at Austin, provided the open science user community access to enormous computing resources and was a game-changer in both the scale and the price-performance of systems offered to the community. Throughout its entire lifespan, Ranger remained in extremely high demand by the community.
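The headline figures above are internally consistent, as a back-of-the-envelope check shows. The per-core rate and the flops-per-cycle assumption below are illustrative arithmetic, not figures from the award record:

```python
# Sanity check of Ranger's quoted peak numbers: 62,976 cores, 579 TFLOPS.
cores = 62_976
peak_flops = 579e12  # 579 trillion floating-point operations per second

per_core = peak_flops / cores  # peak rate attributable to each core
print(f"peak per core: {per_core / 1e9:.1f} GFLOPS")  # ~9.2 GFLOPS

# A core issuing 4 floating-point operations per cycle at 2.3 GHz
# (an assumption typical of AMD Opteron parts of that era) matches:
assumed = 4 * 2.3e9
print(f"4 flops/cycle @ 2.3 GHz: {assumed / 1e9:.1f} GFLOPS")
```

The two rates agree, which is why peak figures for clusters of this generation were commonly quoted as cores × per-core peak.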
By all objective measures, the Ranger project was a remarkable success. This success was acknowledged by the NSF during 2011 with a one year extension to the production life of the project, extending the production life of Ranger from February 2012 to February 2013. By the end of its lifetime, Ranger had delivered more than two and a half million successful jobs to the user community, supporting more than a thousand peer-reviewed funded research projects across all disciplines of engineering and science.
Ranger exceeded virtually every operational metric projected in the original proposal. Uptime significantly exceeded the 95% threshold set by the solicitation. Over the five-year operational life of the system, more than 2.1 billion service units (one service unit equals one hour of compute on a single processor core) were delivered to the national community. More than five thousand scientists and engineers used the system during its lifetime. Throughout the operational period, Ranger was in high demand, with the community requesting more than four times as much system time on Ranger as could be provided.
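To put the 2.1 billion service units in context, one can compare against the hypothetical maximum the machine could have delivered at perfect availability. This is illustrative arithmetic from the figures quoted above, not an official project metric:

```python
# Service units delivered vs. the ideal-availability ceiling for Ranger:
# 62,976 cores over a 5-year production life, 1 service unit = 1 core-hour.
cores = 62_976
hours_per_year = 24 * 365
years = 5

ideal_core_hours = cores * hours_per_year * years  # ceiling at 100% uptime
delivered = 2.1e9                                  # service units delivered

print(f"ideal ceiling: {ideal_core_hours / 1e9:.2f} billion core-hours")
print(f"delivered:     {delivered / ideal_core_hours:.0%} of ideal")
```

Delivering roughly three quarters of the theoretical ceiling is consistent with the report's claim of sustained high demand and uptime well above the 95% solicitation threshold.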
In addition to the system itself, TACC staff provided user support, training, system administration, and network administration to ensure continuous availability and proper utilization of these resources. TACC provided 24x7 on-site coverage; more than 40 staff members worked on the Ranger project, responding to more than 4,000 Ranger support tickets from the community.
Training, education, and outreach activities reached thousands of participants, including roughly 500 participants a year in live training and 1,500 viewing online training each year. More than 150 students enrolled in semester-long academic courses in scientific computing taught by TACC staff in each year of the project. Approximately 10,000 additional people participated in various tours or outreach activities. The technology evaluation activity yielded dividends throughout the project, improving the performance of user applications, and its results have since carried over to subsequent systems, including the follow-on system Stampede.
The individual scientific impacts of Ranger are too numerous to list, but the use of Ranger in computational models for disaster response deserves particular mention. Ranger was used to model the 2011 Japanese earthquake, to model the impacts of the BP oil spill in the Gulf of Mexico, to design vaccines in response to the swine flu threat, to improve tornado prediction models, and extensively in hurricane modeling for numerous storms and users, including NOAA. This last category led to the most high-profile impact of Ranger, when it was cited in the Senate for its impacts. Senator Kay Bailey Hutchison stated, “Of course, I must also note the critical testimony we will hear from Dr. Gordon Wells of the Center for Space Research at the University of Texas. Dr. Wells will testify about his experience using the “...