
Award Abstract # 0910847
Flash Gordon: A Data Intensive Computer

NSF Org: OAC
Office of Advanced Cyberinfrastructure (OAC)
Recipient: UNIVERSITY OF CALIFORNIA, SAN DIEGO
Initial Amendment Date: September 16, 2009
Latest Amendment Date: June 7, 2016
Award Number: 0910847
Award Instrument: Cooperative Agreement
Program Manager: Edward Walker
  edwalker@nsf.gov
  (703) 292-4863
  OAC: Office of Advanced Cyberinfrastructure (OAC)
  CSE: Directorate for Computer and Information Science and Engineering
Start Date: September 1, 2009
End Date: March 31, 2017 (Estimated)
Total Intended Award Amount: $20,000,000.00
Total Awarded Amount to Date: $21,448,912.00
Funds Obligated to Date:
  FY 2009 = $11,007,882.00
  FY 2010 = $9,288,560.00
  FY 2012 = $23,750.00
  FY 2014 = $44,910.00
  FY 2015 = $1,083,809.00
History of Investigator:
  • Michael Norman (Principal Investigator)
    mlnorman@ucsd.edu
  • Wayne Pfeiffer (Co-Principal Investigator)
  • Allan Snavely (Former Co-Principal Investigator)
  • Steven Swanson (Former Co-Principal Investigator)
  • Shawn Strande (Former Co-Principal Investigator)
Recipient Sponsored Research Office: University of California-San Diego
9500 GILMAN DR
LA JOLLA
CA  US  92093-0021
(858)534-4896
Sponsor Congressional District: 50
Primary Place of Performance: University of California-San Diego
9500 GILMAN DR
LA JOLLA
CA  US  92093-0021
Primary Place of Performance Congressional District: 50
Unique Entity Identifier (UEI): UYTTZT6G9DT1
Parent UEI:
NSF Program(s): CYBERINFRASTRUCTURE, Innovative HPC
Primary Program Source:
  01000910DB NSF RESEARCH & RELATED ACTIVIT
  01001011DB NSF RESEARCH & RELATED ACTIVIT
  01001112DB NSF RESEARCH & RELATED ACTIVIT
  01001213DB NSF RESEARCH & RELATED ACTIVIT
  01001314DB NSF RESEARCH & RELATED ACTIVIT
  01001415DB NSF RESEARCH & RELATED ACTIVIT
  01001516DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 7619, 9145, 9215, 9251, HPCC
Program Element Code(s): 723100, 761900
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

This project supports the acquisition, deployment, and operation of a new supercomputing system suitable for data-intensive applications. The system, to be known as Flash Gordon, will be deployed by the University of California at San Diego at the San Diego Supercomputer Center and integrated into the TeraGrid. The system, designed by Appro International Incorporated with partners Intel and ScaleMP, seeks to bridge the widening latency gap between main memory and rotating disk storage in modern computing systems. It uses flash memory to provide a level of dense, affordable, low-latency storage that can be configured as either extended swap space or a very fast file system. The system will consist of very large shared virtual-memory, cache-coherent "super-nodes" to support a versatile set of programming paradigms. Peak performance will exceed 200 teraflop/s in double precision.

Flash Gordon's large addressable virtual memory, low-latency flash memory, and user-friendly programming environment will provide a step-up in capability for data-intensive applications that scale poorly on current large-scale architectures, providing a resource that will enable transformative research in many research domains. Even sequential codes will be able to address up to terabytes of fast virtual memory.

Examples of scientific challenges that this resource will allow researchers to tackle, as described in the proposal, include the following.

De Novo Genome Assembly: Gene sequencers produce information about many small fragments of a genome. Some recent assembly algorithms use a graph-based approach, much more readily executed on a shared-memory system. Using Flash Gordon, researchers will be able to rapidly assemble complex genomes such as mammalian genomes.
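The graph-based approach mentioned above can be illustrated with a toy de Bruijn graph, the structure underlying many short-read assemblers. This is a minimal sketch only (the function name and walk strategy are ours, not any real assembler's API; production assemblers handle errors, repeats, and branching graphs):

```python
# Toy de Bruijn graph assembly: split reads into k-mers, link each k-mer's
# prefix to its suffix, then walk the graph from a source node.
from collections import defaultdict

def de_bruijn_assemble(reads, k):
    """Build a de Bruijn graph from k-mers and walk a simple path."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])  # edge: (k-1)-prefix -> (k-1)-suffix

    # Start from a node with no incoming edge and follow edges to a dead end.
    targets = {t for outs in graph.values() for t in outs}
    start = next(n for n in list(graph) if n not in targets)
    contig, node = start, start
    while graph[node]:
        node = graph[node].pop()
        contig += node[-1]  # each step extends the contig by one base
    return contig

reads = ["ATGGC", "TGGCG", "GGCGT"]
print(de_bruijn_assemble(reads, 4))  # overlapping reads reassemble to "ATGGCGT"
```

The shared-memory advantage described in the text comes from holding the whole graph in one address space; for a mammalian genome that graph has billions of edges, which is why large coherent memory matters.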

Astronomy: Modern astronomy databases can be large; for example, the Sloan Digital Sky Survey is approximately six terabytes in size. Typically, the analysis algorithms that researchers use to perform complex searches for astronomical phenomena can be implemented more easily on shared-memory systems. Flash Gordon will enable researchers to load a copy of the Sloan Digital Sky Survey into the flash memory associated with a super-node, greatly extending the types of analyses astronomers can make.

Astrophysics: Cosmological simulations produce many terabytes of output describing the simulated universe. Detailed analysis of the results of these simulations, to find features such as collapsed halos, galaxy mergers, dwarf galaxies, and galaxy clusters, often requires density-based cluster analysis that does not parallelize well. With Flash Gordon, these analyses can be accelerated by exploiting the large SMP partitions and fast flash memory.
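One classic density-based grouping used in halo finding is friends-of-friends (FOF) linking. The sketch below is illustrative only (a naive O(n^2) version with hypothetical names; the production analyses the text describes use spatial trees and out-of-core data):

```python
# Toy friends-of-friends (FOF) grouping: points chained together by pairwise
# distances within a linking length form one group (e.g. one halo).
def friends_of_friends(points, linking_length):
    """Return the connected groups of points under the linking length."""
    unvisited = set(range(len(points)))
    groups = []
    while unvisited:
        stack = [unvisited.pop()]   # seed a new group from any unvisited point
        group = []
        while stack:
            i = stack.pop()
            group.append(i)
            # Find still-unvisited neighbors within the linking length.
            near = [j for j in unvisited
                    if sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
                       <= linking_length ** 2]
            for j in near:
                unvisited.remove(j)
            stack.extend(near)
        groups.append(sorted(group))
    return sorted(groups)

pts = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 5.0)]
print(friends_of_friends(pts, 0.3))  # two groups: [[0, 1, 2], [3, 4]]
```

The poor parallel scaling noted in the text stems from the chained, irregular neighbor lookups; large SMP partitions let the whole particle set sit in one address space instead of being partitioned across distributed memory.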

Interaction Networks: Interaction networks, graphs representing the relationships between objects, are used in research in areas such as epidemiology, phylogenetics, systems biology, and population biology. These interaction networks can represent relationships between types of data stored in different databases; for example, the combination of social network databases with medical records and genomic profiles to explore questions such as genetic resistance to disease. Flash Gordon will speed analysis of large interaction networks because the databases can be stored on the solid-state disks, greatly reducing access time and permitting more complex types of analysis.

The project team will leverage a number of ongoing educational activities at UCSD to expand and diversify the community of users that can utilize this computational resource, including successful outreach programs for women and minorities from underrepresented groups in science and engineering. The project will also create a summer training program for undergraduates.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

(Showing: 1 - 10 of 17)
Huang, J., and A. D. MacKerell "Induction of Peptide Bond Dipoles Drives Cooperative Helix Formation in the (AAQAA)3 Peptide" Biophysical Journal , v.107 , 2014 , p.991 10.1016/j.bpj.2014.06.038
Allen, S. E., S.-Y. Hsieh, O. Gutierrez, J. W. Bode, and M. C. Kozlowski "Concerted Amidation of Activated Esters: Reaction Path and Origins of Selectivity in the Kinetic Resolution of Cyclic Amines via N-Heterocyclic Carbenes and Hydroxamic Acid Cocatalyzed Acyl Transfer" Journal of the American Chemical Society , v.136 , 2014 , p.11783 10.1021/ja505784w
Ardekani, S., Jain, S., Sanzi, A., Corona-Villalobos, C., Abraham, T., et al. "Shape analysis of hypertrophic and hypertensive heart disease using MRI-based 3D surface models of left ventricular geometry." Medical image analysis , v.29 , 2016 , p.12 10.1016/j.media.2015.11.004
Bae, J., R. P. Nelson, L. Hartmann, and S. Richard "Self-Destructing Spiral Waves: Global Simulations of a Spiral-Wave Instability in Accretion Disks" The Astrophysical Journal , 2016 10.3847/0004-637x/829/1/13
Huang, J., P. E. M. Lopes, B. Roux, and A. D. MacKerell "Recent Advances in Polarizable Force Fields for Macromolecules: Microsecond Simulations of Proteins Using the Classical Drude Oscillator Model" The Journal of Physical Chemistry Letters , v.5 , 2014 , p.3144 10.1021/jz501315h
Kumar, P., S. A. Bojarowski, K. N. Jarzembska, S. Domagała, K. Vanommeslaeghe, A. D. MacKerell, and P. M. Dominiak "A Comparative Study of Transferable Aspherical Pseudoatom Databank and Classical Force Fields for Predicting Electrostatic Interactions in Molecular Dimers" Journal of Chemical Theory and Computation , v.10 , 2014 , p.1652 10.1021/ct4011129
Lakkaraju, S. K., E. P. Raman, W. Yu, and A. D. MacKerell "Sampling of Organic Solutes in Aqueous and Heterogeneous Environments Using Oscillating Excess Chemical Potentials in Grand Canonical-like Monte Carlo-Molecular Dynamics Simulations" Journal of Chemical Theory and Computation , v.10 , 2014 , p.2281 10.1021/ct500201y
Natasha Balac ""Green Machine" Intelligence: Sustaining Smart Grids" IEEE Intelligent Systems , v.28 , 2013 , p.50 10.1109/MIS.2013.127
Pan, L., and S. G. Aller "Equilibrated Atomic Models of Outward-Facing P-glycoprotein and Effect of ATP Binding on Structural Dynamics" Scientific Reports , 2015 10.1038/srep07880
Patel, D. S., R. Pendrill, S. S. Mallajosyula, G. Widmalm, and A. D. MacKerell "Conformational Properties of α- or β-(1→6)-Linked Oligosaccharides: Hamiltonian Replica Exchange MD Simulations and NMR Experiments" The Journal of Physical Chemistry B , v.118 , 2014 , p.2851 10.1021/jp412051v
Patel, D. S., X. He, and A. D. MacKerell "Polarizable Empirical Force Field for Hexopyranose Monosaccharides Based on the Classical Drude Oscillator" The Journal of Physical Chemistry B , v.119 , 2015 , p.637 10.1021/jp412696m

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

The San Diego Supercomputer Center at the University of California, San Diego deployed the Gordon data-intensive supercomputer during 2011. After extensive testing, SDSC operated Gordon for allocated access by academic researchers, educators, and students through NSF’s XSEDE project from March 2012 to March 2017. Some NSF-funded projects continue to use Gordon, with its operation currently funded by the Simons Foundation. During its 61 months of operation as an XSEDE resource, it ran 2.3 million jobs; it provided 450 million core hours for more than 9,000 users, most of whom gained access via a gateway rather than via the command line; and it enabled publication of hundreds of scientific papers.

Gordon has a peak speed of 341 Tflop/s delivered by 16,384 cores in 1,024 compute nodes, each with 64 GB of DRAM. Its signature feature is 300 TB of flash memory served by 64 I/O nodes, with 300 GB accessible to each compute node in the normal configuration.

Gordon was designed by SDSC in collaboration with vendor partners Appro, Intel, Mellanox, and ScaleMP. The design incorporated three significant technology innovations:

  • Extensive use of flash memory via SSDs to bridge the latency gap in the memory hierarchy between DRAM and disk, thereby greatly improving performance for analyses that generate and reuse large amounts of intermediate data.
  • Virtual shared-memory via the vSMP software, which enables large-memory analyses in a cost-effective manner by aggregating up to 1 TB of DRAM from 16 separate compute nodes.
  • A dual-rail, 3D torus interconnect with 4x4x4 topology that uses separate rails for message passing and I/O to minimize interference between them.
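The first innovation enables a simple out-of-core pattern: spill intermediate results to node-local flash scratch and stream them back, instead of holding everything in DRAM or hitting shared disk. A minimal sketch (the FLASH_SCRATCH environment variable and file layout here are our illustration, not Gordon's actual interface):

```python
# Out-of-core pattern enabled by node-local flash: spill intermediate data
# to fast local scratch, then stream it back for the next phase.
# FLASH_SCRATCH is a hypothetical env var; it falls back to the temp dir.
import os
import struct
import tempfile

SCRATCH = os.environ.get("FLASH_SCRATCH", tempfile.gettempdir())

def spill(values, path):
    """Write a sequence of floats as a packed binary scratch file."""
    with open(path, "wb") as f:
        for v in values:
            f.write(struct.pack("d", v))

def reload(path):
    """Stream the floats back; on flash, repeated re-reads stay cheap."""
    out = []
    with open(path, "rb") as f:
        while chunk := f.read(8):
            out.append(struct.unpack("d", chunk)[0])
    return out

scratch_file = os.path.join(SCRATCH, "intermediate.bin")
spill([0.5 * i for i in range(4)], scratch_file)
print(reload(scratch_file))  # [0.0, 0.5, 1.0, 1.5]
```

This is the access pattern behind the quantum-chemistry use case below: large sets of intermediate values (such as electron integrals) are written once and re-read many times, so low-latency local storage pays off directly.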

Because Gordon had faster processors and more DRAM per node than most other XSEDE resources when it entered operation, it attracted users from a broad swath of science and engineering disciplines. The large amount of DRAM was used to obtain a better understanding of the structure of viruses in two studies. Researchers from Cornell and TSRI reconstructed the HK97 virus from microscopy data, while researchers from UCSD modeled the surface proteins of avian flu virus.

Flash memory was widely used as an alternative to disk for I/O of intermediate data generated during data-intensive analyses. Researchers doing quantum chemistry and materials science analyses made extensive use of this capability to store and retrieve the large numbers of electron integrals that arise during such analyses. A research group at Harvard screened millions of candidate semiconductors to find better photovoltaic materials for solar cells. Each candidate was analyzed in a separate job, with hundreds of jobs running simultaneously, all using flash memory to improve performance and minimize the impact on other users that disk I/O would entail.

Flash memory was also used as persistent storage for several projects that received dedicated allocations of an I/O node, and use by the following two projects continues. The IODA project, operated by CAIDA at SDSC, monitors Internet traffic data to detect hacker activity or connectivity disruptions. Deploying the IODA databases on the flash memory of an I/O node resulted in an 18x speedup compared to using disk. The OpenTopography gateway, another project hosted at SDSC, provides persistent access to topography data from high-resolution LiDAR and to tools for analyzing those data.

Virtual shared memory enabled by the vSMP software was used by researchers who needed even more DRAM than in a single compute node or who could benefit from using more than 16 cores in a shared-memory analysis. Researchers from Cornell studied the stress response of a vertebra using a high-resolution, finite-element model. Storing the resulting 750-GB stiffness matrix in DRAM aggregated across multiple compute nodes allowed the analysis to complete more than 10x faster than splitting the matrix between 64 GB of DRAM in a single compute node and 700 GB of disk. Researchers from UC Irvine and UCSD analyzed a large network in mathematical anthropology using 256 cores in a shared-memory model, while other researchers at UCSD post-processed large cosmological simulations using both 256 cores and 256 GB of shared memory.

Science gateways, especially the CIPRES gateway for phylogenetic research, accounted for an increasing percentage of users and enabled many significant results during the operation of Gordon as an XSEDE resource. A research team led from UC Berkeley generated a new tree of life that revealed the existence of a massive new superphylum of bacteria.

Tens of outreach activities exposed thousands of researchers, educators, and students to the benefits of data-intensive computing, especially those enabled by Gordon. SDSC staff hosted tutorials at scientific meetings, workshops at SDSC and on other university campuses, and annual summer institutes. These activities targeted potential users of Gordon in varied disciplines, especially fields such as finance and ecology that had previously made little use of high-performance computing. University faculty also used Gordon for classes in parallel processing and data science.

Last Modified: 06/28/2017
Modified by: Michael L Norman
