
Fact Sheet

From Supercomputing to the TeraGrid


April 19, 2006

This material is available primarily for archival purposes. Telephone numbers or other contact information may be out of date; please see the current contact information on the NSF media contacts page.

Early History: 1960s-1980s. The National Science Foundation's (NSF's) investment in the nation's computational infrastructure began modestly in the 1960s, when NSF funded a number of campus computing centers. That support was short-lived, however, and by the early 1980s several reports from the scientific community had noted a serious shortage of advanced computing resources available to researchers at American universities. By far the most influential was a joint agency study edited by Peter Lax and released in December 1982.

Lax Report (1982): http://www.pnl.gov/scales/docs/lax_report1982.pdf.

The Lax Report catalyzed the emergence of significant new NSF support for high-end computing, which in turn led directly to--

The Supercomputer Centers: 1985-1997. NSF established five of these centers in 1985 and 1986:

  • The Cornell Theory Center (CTC) at Cornell University,
  • The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign,
  • The Pittsburgh Supercomputing Center (PSC) at Carnegie Mellon University and the University of Pittsburgh,
  • The San Diego Supercomputer Center (SDSC) at the University of California, San Diego,
  • The John von Neumann Center at Princeton University.

For the next 12 years, these centers served as cornerstones of the nation's High-Performance Computing and Communications strategy. On the one hand, the centers helped push the limits of advanced computing hardware and software while providing supercomputer access to a broad cross-section of academic researchers, regardless of discipline or funding agency. On the other, they were instrumental in advancing network infrastructure. In 1986, for example, the centers and the NSF-supported National Center for Atmospheric Research (NCAR) in Colorado became the first nodes on the NSFNET backbone. From 1989 to 1995, the Illinois, Pittsburgh and San Diego centers helped push the frontiers of high-speed networking as participants in the then-bleeding-edge Gigabit Network Testbed Projects, supported by NSF and DARPA. Finally, in 1995, after NSFNET was decommissioned, the centers became the first nodes on NSF's very-high-performance Backbone Network Service (vBNS) for research and education.

In 1990, meanwhile, following a review of the supercomputer centers program, NSF extended support for CTC, NCSA, PSC and SDSC through 1995. Then in 1994, that support was extended again for another two years, through 1997, while a task force chaired by Edward Hayes considered the future of the program.

Hayes Report (1995): http://www.nsf.gov/pubs/1996/nsf9646/nsf9646.htm.

Out of the recommendations of the Hayes report came a new program designed to build on and replace the centers--

Partnerships for Advanced Computational Infrastructure: 1997-2004. The National Science Board announced the two PACI awardees in March 1997:

  • The National Computational Science Alliance: a consortium led by NCSA, with participation by Partners for Advanced Computational Services at Boston University, the University of Kentucky, the Ohio Supercomputer Center, the University of New Mexico, and the University of Wisconsin.
  • The National Partnership for Advanced Computational Infrastructure (NPACI): a consortium led by SDSC, with participation by mid-range computing centers at Caltech, The University of Michigan, and the Texas Advanced Computing Center at The University of Texas at Austin.

In addition to the leading-edge and mid-range sites, the partnerships involved nearly 100 sites across the country in efforts to make more efficient use of high-end computing in all areas of science and engineering. The partnerships also collaborated on the Education, Outreach and Training (EOT) PACI.

The Alliance and NPACI continued to provide academic researchers with access to the most powerful computing resources available. These resources included the first academic teraflops system--a computer capable of 1 trillion floating-point operations per second--and some of the first large-scale Linux clusters for academia. At the same time, the partnerships were instrumental in fostering the maturation of grid computing and its widespread adoption by the scientific community and industry. Grid computing connects separate, often geographically distributed, computing resources so that their collective power can be applied to computationally intensive problems.
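As a rough illustration of the grid-computing idea, the sketch below farms independent pieces of a large job out to a pool of workers and combines the partial results. It is a hypothetical miniature only: local processes stand in for supercomputers at different sites, and production grid middleware (tools such as the Globus Toolkit) additionally handles authentication, scheduling and wide-area data movement.

    # Minimal, hypothetical sketch of the grid-computing pattern:
    # split a large computation into independent chunks, dispatch them
    # to separate compute resources, and combine the partial results.
    # Local processes stand in here for remote machines on a grid.
    from concurrent.futures import ProcessPoolExecutor

    def simulate_chunk(chunk_id: int) -> float:
        """Stand-in for one compute-intensive slice of a larger simulation."""
        return float(sum(i * i for i in range(chunk_id * 100_000, (chunk_id + 1) * 100_000)))

    if __name__ == "__main__":
        # Dispatch 16 independent chunks "to the grid" and gather the results.
        with ProcessPoolExecutor(max_workers=4) as pool:
            partial_results = list(pool.map(simulate_chunk, range(16)))
        print("combined result:", sum(partial_results))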

The PACI partners were involved in virtually every major grid-computing initiative, from the Grid Physics Network to the National Virtual Observatory to the George E. Brown, Jr. Network for Earthquake Engineering Simulation. They were also driving forces in recognizing both the critical scientific importance of massive data collections and the technical challenges of accessing them. Following the sunset of the PACI program, NSF continued core support for NCSA and SDSC to make additional large-scale HPC resources available and to stimulate the expansion of cyberinfrastructure capabilities for the nation's scientists and engineers.

Terascale Initiatives: 2000-2004. In response to a 1999 report by the President's Information Technology Advisory Committee, NSF embarked on a series of "Terascale" initiatives to acquire: (1) computers capable of trillions of floating-point operations per second (teraflops); (2) disk-based storage systems with capacities measured in trillions of bytes (terabytes); and (3) networks with bandwidths of billions of bits per second (gigabits per second).

In 2000, the $36 million Terascale Computing System award to PSC supported the deployment of a computer (named LeMieux) capable of 6 trillion operations per second. When LeMieux went online in 2001, it was the most powerful U.S. system committed to general academic research. Five years later, it remains a highly productive system.

In 2001, NSF awarded $45 million to NCSA, SDSC, Argonne National Laboratory, and the Center for Advanced Computing Research (CACR) at the California Institute of Technology to establish a Distributed Terascale Facility (DTF). Aptly named the TeraGrid, this multi-year effort aimed to build and deploy the world's largest, fastest and most comprehensive distributed infrastructure for general scientific research.

The initial TeraGrid specifications called for computers capable of performing 11.6 teraflops, disk-storage systems with capacities of more than 450 terabytes, visualization systems and data collections, all integrated via grid middleware and linked through a 40-gigabit-per-second optical network.
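To put those figures in perspective, the back-of-envelope calculation below (an illustrative aside, not part of the original award documents) estimates how long it would take to move the full 450 terabytes across a 40-gigabit-per-second link at the ideal line rate, ignoring protocol overhead:

    # Back-of-envelope: time to move 450 TB of disk storage over a
    # 40 Gb/s optical backbone, assuming the full line rate is sustained
    # and ignoring protocol overhead (an idealized illustration only).
    storage_bytes = 450e12          # 450 terabytes
    link_bits_per_second = 40e9     # 40 gigabits per second

    seconds = storage_bytes * 8 / link_bits_per_second
    print(f"{seconds:,.0f} seconds (~{seconds / 3600:.0f} hours)")  # 90,000 s, about 25 hours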

In 2002, NSF made a $35 million Extensible Terascale Facility (ETF) award to expand the initial TeraGrid to include PSC and integrate PSC's LeMieux system. Resources in the ETF provide the national research community with more than 20 teraflops of computing power distributed among the five sites and nearly one petabyte (one quadrillion bytes) of disk storage capacity.

To further expand the TeraGrid's capabilities, NSF made three Terascale Extensions awards totaling $10 million in 2003. The new awards funded high-speed networking connections to link the TeraGrid with resources at Indiana and Purdue Universities, Oak Ridge National Laboratory, and the Texas Advanced Computing Center at The University of Texas at Austin. Through these awards, the TeraGrid put neutron-scattering instruments, large data collections and other unique resources, as well as additional computing and visualization resources, within reach of the nation's research and education community.

In 2004, as a culmination of the DTF and ETF programs, the TeraGrid entered full production mode, providing coordinated, comprehensive services for general U.S. academic research.

The TeraGrid: 2005-2010. In August 2005, NSF's newly created Office of Cyberinfrastructure extended support for the TeraGrid with a $150 million set of awards for operation, user support and enhancement of the TeraGrid facility over the next five years. Using high-performance network connections, the TeraGrid now integrates high-performance computers, data resources and tools, and high-end experimental facilities around the country. As of early 2006, these integrated resources include more than 102 teraflops of computing capability and more than 15 petabytes (quadrillions of bytes) of online and archival data storage with rapid access and retrieval over high-performance networks. Through the TeraGrid, researchers can access over 100 discipline-specific databases. With this combination of resources, the TeraGrid is the world's largest, most comprehensive distributed cyberinfrastructure for open scientific research.

TeraGrid: http://www.teragrid.org/.

NSF Office of Cyberinfrastructure: http://nsf.gov/dir/index.jsp?org=OCI.

-NSF-

