
NSF Org: | CCF Division of Computing and Communication Foundations |
Recipient: | |
Initial Amendment Date: | March 4, 2014 |
Latest Amendment Date: | March 4, 2014 |
Award Number: | 1434596 |
Award Instrument: | Continuing Grant |
Program Manager: | Sol Greenspan, sgreensp@nsf.gov, (703) 292-7841, CCF Division of Computing and Communication Foundations, CSE Directorate for Computer and Information Science and Engineering |
Start Date: | July 1, 2013 |
End Date: | July 31, 2016 (Estimated) |
Total Intended Award Amount: | $364,274.00 |
Total Awarded Amount to Date: | $364,274.00 |
Funds Obligated to Date: | FY 2011 = $101,000.00; FY 2012 = $85,000.00; FY 2013 = $85,000.00 |
History of Investigator: | |
Recipient Sponsored Research Office: | 506 S WRIGHT ST, URBANA, IL, US 61801-3620, (217) 333-2187 |
Sponsor Congressional District: | |
Primary Place of Performance: | SUITE A, 1901 SOUTH FIRST ST., CHAMPAIGN, IL, US 61820-7473 |
Primary Place of Performance Congressional District: | |
Unique Entity Identifier (UEI): | |
Parent UEI: | |
NSF Program(s): | Software & Hardware Foundation, SOFTWARE ENG & FORMAL METHODS, Computing in the Cloud |
Primary Program Source: | 01001112DB NSF RESEARCH & RELATED ACTIVIT; 01001213DB NSF RESEARCH & RELATED ACTIVIT; 01001314DB NSF RESEARCH & RELATED ACTIVIT |
Program Reference Code(s): | |
Program Element Code(s): | |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).
Developer testing has been widely recognized as an important and valuable means of improving software reliability, partly because it can expose faults early in the software development life cycle. However, manual developer testing is often tedious and insufficient. Testing tools can make more economical use of resources by reducing manual effort. To maximize the value of developer testing, effective and efficient support for cooperation between developers and tools is greatly needed, yet such support is lacking in state-of-the-art research and practice.
This research aims to create a systematic framework for cooperative developer testing that provides practical techniques and tools, together with an integrated research and education plan. In particular, the research addresses fundamental questions around the specification of test intentions by developers to communicate their testing goals or guidance to tools, the satisfaction of those test intentions by tools, and the explanation of intention satisfaction by tools. Test-intention satisfaction and its explanation assist developers in accomplishing not only their testing tasks but also their debugging tasks. The framework also helps infer likely test intentions to reduce the manual effort of specifying them. The broader impacts of the project include improving software reliability and collaborating with industry to transfer technology.
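As a rough illustration of this specify-satisfy-explain loop, the minimal Java sketch below models a developer-specified test intention, a tool's attempt to satisfy it, and the explanation returned when satisfaction fails. All names here (TestIntention, ToolResponse, trySatisfy) are hypothetical and invented for illustration; they are not the project's actual API.

    // Hypothetical sketch only (not the project's actual API): a developer states a
    // test intention, a cooperating tool tries to satisfy it, and the tool explains
    // the obstacle when it cannot.
    import java.util.Optional;

    public final class CooperativeTestingSketch {

        /** A test intention: a coverage or behavior goal the developer asks a tool to achieve. */
        record TestIntention(String targetMethod, String goal) {}

        /** The tool's response: a generated test if satisfied, otherwise an explanation. */
        record ToolResponse(Optional<String> generatedTest, Optional<String> explanation) {}

        /** Stand-in for a test-generation tool; a real tool would run, e.g., symbolic execution here. */
        static ToolResponse trySatisfy(TestIntention intention) {
            // Illustrative outcome: the tool cannot build an object needed to reach the
            // goal, so it explains the obstacle instead of failing silently.
            return new ToolResponse(
                    Optional.empty(),
                    Optional.of("Could not construct an instance required by "
                            + intention.targetMethod()
                            + "; please supply a factory method or a seed test."));
        }

        public static void main(String[] args) {
            TestIntention intention =
                    new TestIntention("Parser.parse", "cover the error-handling branch");
            ToolResponse response = trySatisfy(intention);
            response.explanation().ifPresent(System.out::println); // developer acts on the explanation
        }
    }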
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
The main goal of the project was to develop a systematic framework for cooperative developer testing that provides practical techniques and tools, with an integrated research and education plan. The project explored synergistic cooperation between developers and testing tools to achieve higher software reliability at lower cost.
The outcomes of this project are a set of new techniques and tools for cooperative developer testing. The project advanced the understanding of fundamental issues in cooperation between developers and testing tools for achieving higher software reliability at lower cost. The project explored new approaches that reduce developers' cooperation cost by improving tools' automation capabilities and by reducing the false warnings that tools raise when seeking cooperation from developers.
More specifically, we have developed techniques and tools that precisely identify and report the problems that prevent test-generation tools from achieving high structural coverage, in order to reduce developers' guidance effort. We have developed techniques and tools that generate proper method sequences to construct desired objects used as method parameters in object-oriented unit test generation. We have developed a methodology that retrofits existing conventional unit tests into parameterized unit tests to improve fault-detection capability and code coverage. We have developed techniques and tools that generate various cloud states for effective testing of cloud applications. We have developed techniques and tools that predict which loops become workload-dependent performance bottlenecks under large workloads, reducing developers' inspection effort in performance testing. We have also conducted characteristic studies of loop problems in dynamic symbolic execution to improve tools' testing effectiveness.
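As one concrete illustration of the retrofitting methodology mentioned above, the following minimal sketch (our own example in Java with JUnit 5, not taken from the project's artifacts, which target the .NET/Pex setting) shows a conventional unit test with a hard-coded input rewritten as a parameterized unit test whose assertion states a property that must hold for every supplied input.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.ArrayDeque;
    import java.util.Deque;
    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.ValueSource;

    class StackBehaviorTest {

        // Conventional unit test: one hard-coded input and one hard-coded expected value.
        @Test
        void pushThenPeekReturnsPushedValue() {
            Deque<Integer> stack = new ArrayDeque<>();
            stack.push(42);
            assertEquals(42, stack.peek().intValue());
        }

        // Retrofitted parameterized unit test: the input becomes a parameter and the
        // assertion becomes a property over every supplied value, so the same test
        // logic can be exercised with many (possibly tool-generated) inputs.
        @ParameterizedTest
        @ValueSource(ints = {0, -1, 42, Integer.MAX_VALUE})
        void pushThenPeekReturnsPushedValue(int value) {
            Deque<Integer> stack = new ArrayDeque<>();
            stack.push(value);
            assertEquals(value, stack.peek().intValue());
        }
    }

The parameterized form separates test logic from test data, so a test-generation tool or a richer value source can supply many inputs without changing the test body.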
We have collaborated with Microsoft Research on improving an automatic test-generation tool called Pex (shipped as IntelliTest in Microsoft Visual Studio 2015 Enterprise Edition). We have also collaborated with Microsoft Research on Code Hunt, a serious gaming platform for coding contests and for practicing programming skills; launched in 2014, Code Hunt had been used by over 350,000 players as of August 2016.
We have disseminated our research results through publications in highly competitive conferences and journals, public releases of tools and evaluation artifacts, and research exchanges. We have trained next-generation researchers through student training and mentoring, and next-generation software engineers through undergraduate- and graduate-level education.
Last Modified: 01/29/2017
Modified by: Tao Xie