Award Abstract # 1338099
MRI: Acquisition of Big-Data Private-Cloud Research Cyberinfrastructure (BDPC)

NSF Org: CNS
Division Of Computer and Network Systems
Recipient: WILLIAM MARSH RICE UNIVERSITY
Initial Amendment Date: August 30, 2013
Latest Amendment Date: August 30, 2013
Award Number: 1338099
Award Instrument: Standard Grant
Program Manager: Rita Rodriguez
CNS Division Of Computer and Network Systems
CSE Directorate for Computer and Information Science and Engineering
Start Date: October 1, 2013
End Date: September 30, 2016 (Estimated)
Total Intended Award Amount: $400,000.00
Total Awarded Amount to Date: $400,000.00
Funds Obligated to Date: FY 2013 = $400,000.00
History of Investigator:
  • Moshe Vardi (Principal Investigator)
    vardi@rice.edu
  • Lydia Kavraki (Co-Principal Investigator)
  • Ashok Veeraraghavan (Co-Principal Investigator)
  • Genevera Allen (Co-Principal Investigator)
  • Stephen Bradshaw (Co-Principal Investigator)
Recipient Sponsored Research Office: William Marsh Rice University
6100 MAIN ST
Houston
TX  US  77005-1827
(713)348-4820
Sponsor Congressional District: 09
Primary Place of Performance: William Marsh Rice University
6100 Main St
Houston
TX  US  77005-1827
Primary Place of Performance Congressional District: 09
Unique Entity Identifier (UEI): K51LECU1G8N3
Parent UEI:
NSF Program(s): Major Research Instrumentation
Primary Program Source: 01001314DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 1189
Program Element Code(s): 118900
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Proposal #: 13-38099
PI(s): Vardi, Moshe; Allen, Genevera; Bradshaw, Stephen; Kavraki, Lydia; Veeraraghavan, Ashok
Institution: Rice University
Title: MRI/Acq.: Big-Data Private Cloud Research Cyberinfrastructure (BDPC)
Project Proposed:
This project acquires a novel cyberinfrastructure instrument for big-data cloud computing, designed as a loosely coupled computation system with large memory requirements. The instrument enables a significant range of application domains as well as research into cloud-computing infrastructures. The domain sciences addressed range from the development of big-data-enabling software technologies, spanning fundamental computer science work, to the analysis of electronic medical records, Twitter streams, and hurricane evacuation strategies. Additional benefits are expected in the understanding of disease and therapeutic treatments, and in the development and application of mathematical models in machine learning, optimization, compressed sensing, image processing, statistical analysis, and data mining. The instrument will also help bridge the gap between numerical models and observations in astrophysics.
Broader Impacts:
The broader impacts on society, especially in education and training (including for members of underrepresented groups), are compelling. The instrument will directly improve the educational experience of all students taking classes in computing and computational problem solving. The targeted research communities are diverse and broad, include underrepresented groups, and have strong empirical and experimental components. The proposed instrument is highly suitable for training and education.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

(Showing: 1 - 10 of 23)
B.J. Sirovetz, N.P. Schafer, and P.G. Wolynes "Water Mediated Interactions and the Protein Folding Phase Diagram in the Temperature-Pressure Plane" J. Phys. Chem. B , v.119 , 2016 , p.11416-114
H. He, H. Fang, M.D. Miller, G.N. Phillips Jr and W.-P. Su "Improving the efficiency of molecular replacement by utilizing a new iterative transform phasing algorithm" Acta Crystallogr; http://scripts.iucr.org/cgi-bin/paper?sc5096 , v.A72 , 2016 , p.539-547 10.1107/S2053273316010731
H. Lammert, J.K. Noel, E. Haglund, A. Schug, and J.N. Onuchic "Constructing a folding model for protein S6 guided by native fluctuations deduced from NMR structures" J Chem Phys , v.143 , 2015
H. Robatjazi, S.M. Bahauddin, C. Doiron, and I. Thomann "Direct Plasmon-Driven Photoelectrocatalysis" Nano Lett; http://pubs.acs.org/doi/full/10.1021/acs.nanolett.5b02453 , v.15 , 2015 , p.6155-6161
H. Robatjazi, S.M. Bahauddin, L.H. Macfarlan, S. Fu, and I. Thomann "Ultrathin AAO Membrane as a Generic Template for Sub-100 nm Nanostructure Fabrication" Chem. Mater; http://pubsdc3.acs.org/doi/full/10.1021/acs.chemmater.6b00722 , v.28 , 2016 , p.4546-4553
J. Feng, C.A. Jones, M. Cibula, E.A. Krnacik, D.H. McIntyre, H. Levine, and B. Sun "Micromechanics of cellularized biopolymer networks" Proceedings of the National Academy of Sciences , v.112 , 2015 , p.E5117-E51
J. Feng, H. Levine, X. Mao, and L.M. Sander "Nonlinear elasticity of disordered fiber networks" Soft Matter , v.12 , 2016 , p.1419-1424
J. Feng, S. Sevier, B. Huang, D. Jia, and H. Levine "Modeling delayed processes in biological systems" Physical Review E , v.94 , 2016 032408
J. Holloway, M. S. Asif, M. K. Sharma, N. Matsuda, R. Horstmeyer, O. Cossairt and A. Veeraraghavan "Toward Long-Distance Subdiffraction Imaging Using Coherent Camera Arrays" in IEEE Transactions on Computational Imaging , v.2 , 2016 , p.251-265 10.1109/TCI.2016.2557067
J.K. Noel, M. Levi, M. Ragunathan, H. Lammert, R. Hayes, J.N. Onuchic, and P.C. Whitford "SMOG 2: A Versatile Software Package for Generating Structure-Based Models" PLOS Comp Biol. , v.12 , 2016
M. Chen, M. Tsai, W. Zheng, and P.G. Wolynes "The Aggregation Free Energy Landscapes of Polyglutamine Repeats" Journal of the American Chemical Society , 2016 10.1021/jacs.6b08665

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

Today, computation is a cornerstone of scientific inquiry, as important to science and engineering as the traditional approaches of theory and experimentation. To support computational research and scholarship across science and engineering, we procured, deployed, and now operate a high-throughput computing resource at Rice. This newly deployed system replaced old, inefficient, end-of-life infrastructure whose limited capabilities made it too costly to operate for what it delivered and inadequate for supporting the forward-looking, federally funded research and development carried out by researchers at Rice University and their collaborators.

The deployed system, co-funded by Rice University, consists of 88 dual-socket HPE Linux-based server nodes interconnected by a 10 Gb/s network. The system is supported by a 220-terabyte (TB) Lustre scratch file system and a 50 TB NFS file system for longer-term data storage. Rice plans to operate this resource for 5 years (60 months).

After 18 months in full production, the system has delivered over 12 million core hours supporting the computationally demanding research of fifty faculty members and their more than three hundred associated users. The user community continues to grow and, based on past experience, we expect the number of faculty members and users supported to double within the next 6-12 months. Faculty members supporting users on the system today are developing computational techniques to advance science and engineering with direct impact on many disciplines, including astrophysics, computer-aided verification, machine vision, robotics, and statistical machine learning. The system has already enabled significant student training, not only by serving as a platform for accelerating research and discovery, but also by serving as a resource used in seven classes across engineering (reaching over 250 students). While class usage is a very small percentage of total time, it is a critical part of preparing students for careers that are increasingly dependent on computing skills.
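
To put the core-hour figure in context, the back-of-the-envelope sketch below converts the totals stated above into an implied average number of busy cores. It uses only the delivered core hours and the 18-month production window, and is purely illustrative rather than a measured utilization figure.

```python
# Back-of-the-envelope: average concurrent core usage implied by the
# delivered core-hours over the stated 18-month production window.
# Only the two figures stated above are used; nothing here is measured.

delivered_core_hours = 12_000_000        # "over 12 million core hours" (stated)
production_months = 18                   # "after 18 months in full production" (stated)
hours_elapsed = production_months * 730  # ~730 wall-clock hours per month

avg_busy_cores = delivered_core_hours / hours_elapsed
print(f"Elapsed wall-clock hours: {hours_elapsed:,}")
print(f"Average cores kept busy:  {avg_busy_cores:,.0f}")
# => roughly 900 cores in continuous use, on average, across the 88 nodes
```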

While national supercomputing resources such as XSEDE can provide the research community with substantial computing capabilities, the investigators needed a modest-size local resource that complements XSEDE by facilitating development, small-scale experimentation, and training, and by supporting the long tail of research computing. In particular, the higher bandwidth and lower latency to the desktop from a local system better support both code development and interactive data analysis using tools with graphical user interfaces.

This system has enabled Rice to successfully deploy and operate an on-premises computational infrastructure that supports the expanding demand for throughput computing and facilitates rapidly growing simulation, modeling, big-data, and analytics computational experiments. Massive throughput computing is a class of computational workload that is generally too large to be executed efficiently on an individual workstation but too small to be competitive for, or well suited to, an XSEDE resource. This class of computing need is currently best served by on-premises infrastructure such as that procured and deployed under this award.
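
As an illustration of the workload class described above, the following minimal Python sketch shows a high-throughput pattern: many small, fully independent tasks (here a parameter sweep) that can be spread across cores or nodes with no communication between them. The simulate() function is a hypothetical stand-in for any per-parameter computation; on the actual cluster such tasks would normally be dispatched through the batch scheduler rather than run in a single process.

```python
# Minimal illustration of a high-throughput workload: many independent,
# modest-size tasks (a parameter sweep) rather than one tightly coupled job.
# simulate() is a hypothetical stand-in for a real per-parameter computation;
# on a shared cluster these tasks would typically run as separate batch jobs.
from concurrent.futures import ProcessPoolExecutor
import math

def simulate(param: float) -> float:
    """Hypothetical per-task computation; each call is independent of all others."""
    return sum(math.sin(param * k) for k in range(1, 100_000))

def sweep(params):
    # Independent tasks can be spread across cores (or cluster nodes)
    # with no communication between them, which is what makes this a
    # "throughput" workload rather than a tightly coupled computation.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(simulate, params))

if __name__ == "__main__":
    results = sweep([0.1 * i for i in range(1, 33)])
    print(f"Completed {len(results)} independent tasks")
```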


Last Modified: 12/02/2016
Modified by: Moshe Y Vardi
