
NSF Org: CNS Division Of Computer and Network Systems
Recipient:
Initial Amendment Date: April 16, 2018
Latest Amendment Date: May 8, 2018
Award Number: 1833291
Award Instrument: Continuing Grant
Program Manager: Marilyn McClure, mmcclure@nsf.gov, (703) 292-5197, CNS Division Of Computer and Network Systems, CSE Directorate for Computer and Information Science and Engineering
Start Date: August 13, 2017
End Date: May 31, 2019 (Estimated)
Total Intended Award Amount: $9,100.00
Total Awarded Amount to Date: $9,100.00
Funds Obligated to Date:
History of Investigator:
Recipient Sponsored Research Office: 101 COMMONWEALTH AVE, AMHERST, MA, US 01003-9252, (413) 545-0698
Sponsor Congressional District:
Primary Place of Performance: 100 Venture Way, Hadley, MA, US 01035-9450
Primary Place of Performance Congressional District:
Unique Entity Identifier (UEI):
Parent UEI:
NSF Program(s): CSR-Computer Systems Research
Primary Program Source:
Program Reference Code(s):
Program Element Code(s):
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
Computers with many tens to hundreds of "cores" are on their way, but programming languages and tools
that exploit them well have lagged. At the same time, there are emerging programming languages intended
for writing programs to run on these computers. These languages, such as X10 and Fortress, add support for
new concepts that make it easier to write many-core programs, but there does not yet exist good compiler and
run-time support for these languages. Systems that run Java, namely Java virtual machines such as those that
run on virtually every laptop, desktop, and server today, supply much of what the new languages need, but
fall short in some important ways. In particular, they do not provide ways to specify in which part of memory to place particular objects or on which core to run which computations, easy ways to keep all cores busy working on different parts of a large piece of data, or support for synchronizing, and getting right, all the data manipulations happening at the same time.
happening at the same time. This project is extending an existing research Java virtual machine (Jikes
RVM) with support for many ways of doing the things that the new languages need in order to run well
on many-core computers. The primary goal is to devise extensions to standard Java virtual machines for
this new world, and to make it possible for many others to experiment with different ways of implementing
these extensions, thus leveraging the creativity of the whole community of language and virtual machine
researchers. Secondary goals include offering reasonably good initial implementations of virtual machine
extensions as a starting point for future research and development, and proposing specific extensions to the
Java virtual machine specification standard.
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
This project was a supplement offering Research Experiences for Undergraduates. Two undergraduates helped run benchmark programs and collect useful detailed information about the actions the programs perform as they run, which informs performance analysis and prediction of future performance. The information collected to date covers only a subset of the available benchmarks and inputs in the suite used, so it would be helpful to continue this project to complete the set of information.
The set of programs: An independent industry consortium, the Standard Performance Evaluation Corporation (SPEC), publishes a range of benchmark suites. We developed analyses of the SPEC CPU 2006 suite some years ago, and the present project repeats and extends that work with the more modern SPEC CPU 2017 suite.
Program inputs: Each SPEC benchmark offers three "sizes" of inputs: test, train, and reference (ref). These increase roughly in length and running time. So far we have collected test and train runs for every benchmark and are in the middle of collecting the ref runs, which are significantly larger (weeks for each run in our measurement framework). In addition, Prof. Nelson Amaral of the University of Alberta has assembled a collection of additional inputs, called the Alberta Workloads. We wish to measure these as well: the larger and more coherent a collection one has, the better the analysis and prediction one is likely to achieve.
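As a rough illustration of the collection process described above, the sketch below enumerates the benchmark/size combinations one would hand to SPEC's `runcpu` driver (whose `--size` option selects the test, train, or ref inputs). The config file name and the two-benchmark subset are illustrative assumptions, not the project's actual setup.

```python
# Hypothetical sketch of enumerating SPEC CPU 2017 runs by input size.
# The config file name and benchmark subset are illustrative assumptions;
# runcpu's --size option selects the test/train/ref workload.
SIZES = ["test", "train", "ref"]
BENCHMARKS = ["500.perlbench_r", "502.gcc_r"]  # small subset for illustration

def runcpu_commands(config="my.cfg"):
    """Build one runcpu command line per (size, benchmark) pair."""
    cmds = []
    for size in SIZES:
        for bench in BENCHMARKS:
            cmds.append(f"runcpu --config={config} --size={size} {bench}")
    return cmds

for cmd in runcpu_commands():
    print(cmd)
```

In practice each such command is run inside the project's measurement framework, which is why the ref-size runs take weeks each.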
Last Modified: 09/30/2019
Modified by: J. Eliot B Moss