
NSF Org: | CNS Division Of Computer and Network Systems |
Recipient: | |
Initial Amendment Date: | August 6, 2012 |
Latest Amendment Date: | August 13, 2013 |
Award Number: | 1205472 |
Award Instrument: | Continuing Grant |
Program Manager: | Anindya Banerjee, abanerje@nsf.gov, (703) 292-7885, CNS Division Of Computer and Network Systems, CSE Directorate for Computer and Information Science and Engineering |
Start Date: | August 1, 2012 |
End Date: | July 31, 2016 (Estimated) |
Total Intended Award Amount: | $332,104.00 |
Total Awarded Amount to Date: | $332,104.00 |
Funds Obligated to Date: | FY 2013 = $136,266.00 |
History of Investigator: | |
Recipient Sponsored Research Office: | 2200 VINE ST # 830861, LINCOLN, NE, US 68503-2427, (402) 472-3171 |
Sponsor Congressional District: | |
Primary Place of Performance: | 312 N 14th St, Alex Bldg West, Lincoln, NE, US 68588-0430 |
Primary Place of Performance Congressional District: | |
Unique Entity Identifier (UEI): | |
Parent UEI: | |
NSF Program(s): | Special Projects - CNS, CCRI-CISE Cmnty Rsrch Infrstrc |
Primary Program Source: | 01001314DB NSF RESEARCH & RELATED ACTIVIT |
Program Reference Code(s): | |
Program Element Code(s): | |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
User-interactive, event-driven software is pervasive today. End users interact with applications by pointing, clicking, or touching the interface, and as this happens, the programs respond. Ensuring the dependability of these systems through software testing is paramount, because inadequate testing currently costs the US economy billions of dollars annually. Yet the flexibility of these systems, which makes them appealing to users, also increases the difficulty of testing them. This difficulty has fueled a large body of research on user-interactive, event-driven testing, but the newly developed techniques are often evaluated using isolated case studies and experiments. The user-interactive, event-driven testing community lacks a common set of benchmarks and tools for evaluating new methods, leading to experimental mismatch: the results of one study are difficult to compare with those of another. The lack of common benchmarks also means that the results of multiple studies cannot easily be combined to build a larger body of knowledge.
This project reduces this mismatch and advances user-interactive, event-driven testing research by developing a shared research and experimentation web infrastructure called COMET. An initial proof of concept for COMET was developed through earlier support from NSF. Factors that contribute to the mismatch include the development of platform-specific test methods, test harnesses that require customization for each test subject application and each operating system, and models that are incompatible with one another. The research devises new techniques to control the testing environment, contextualizes factors that may affect experimental outcomes, and allows the artifacts to evolve and change over time. It is building a shared and extensible web infrastructure of benchmarks, tools, models, and test artifacts that will enable scientific discovery and advance the state of the art of user-interactive, event-driven testing. COMET is a public resource that will be available to the broader community. Its impact will extend not only to the user-interactive, event-driven testing research community, but also to others who work with user interfaces, such as researchers who study usability, and to industry and the software testing community at large. The project involves both graduate and undergraduate students. Artifacts from the COMET website will be used for educational purposes in the classroom.
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
A large body of research has focused on developing software testing techniques for applications that have user-interactive (often graphical) interfaces. However, we lack ways to fairly compare results between researchers, since there has been no common set of benchmarks. In addition, controlling test outcomes so that they produce the same result each time is non-trivial, because this type of testing depends on complex test harnesses and on the environment (the starting state of the application, the operating system, tools, and system resources). In this project we built a website called COMET (Community Event-based Testing, comet.unl.edu) to collate and share benchmarking artifacts for this purpose. The website allows registered users to download the artifacts in order to repeat our experiments and to compare their new approaches against ours. As the number of COMET users grows over time, we expect many novel testing techniques to be developed that can be fairly compared across research groups. Throughout the life of this grant we have also studied fundamental research questions on topics such as test suite repeatability, test suite repair, and test oracles.
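The environmental dependence described above is what makes event-driven tests flaky: if the application's starting state, the workspace on disk, or a random seed differs between runs, the same event sequence can yield different results. The sketch below is only an illustration of that idea in Python; it is not the COMET harness, and every name in it is hypothetical. It wraps a test in a fresh workspace, a restored environment, and a fixed random seed so that two runs of the same event sequence agree.

# Illustrative sketch only: NOT the COMET harness. Shows how pinning down environmental
# state (working files, environment variables, random seed) makes an event-driven test
# produce the same result on every run.
import os
import random
import shutil
import tempfile

def run_with_controlled_environment(test_fn, seed=0):
    """Run test_fn in a fresh workspace with a fixed seed and a restored environment."""
    workspace = tempfile.mkdtemp(prefix="comet_demo_")  # fresh starting state every run
    saved_env = dict(os.environ)
    try:
        os.environ["DEMO_WORKSPACE"] = workspace         # hypothetical variable the test reads
        random.seed(seed)                                 # removes one common source of flakiness
        return test_fn(workspace)
    finally:
        os.environ.clear()
        os.environ.update(saved_env)                      # restore the caller's environment
        shutil.rmtree(workspace, ignore_errors=True)      # no state leaks into the next run

def sample_event_sequence_test(workspace):
    # Stand-in for replaying a recorded sequence of GUI events against an application.
    events = ["open", "type", "save"]
    random.shuffle(events)                                # order would vary without the fixed seed
    with open(os.path.join(workspace, "log.txt"), "w") as log:
        log.write(",".join(events))
    return events

if __name__ == "__main__":
    first = run_with_controlled_environment(sample_event_sequence_test)
    second = run_with_controlled_environment(sample_event_sequence_test)
    print("repeatable:", first == second)                 # True: same seed, same starting state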
Intellectual Merit: We have developed a website, created a common format for user-interactive testing artifacts, and included a template that allows members of the broader community to contribute. We have uploaded 29 benchmark programs (including several contributed by others in the community). The benchmarks span a variety of applications and include test cases, models, tools, and coverage/fault matrices when available. We have made an effort to document full environmental conditions so that our experiments are repeatable. Almost 200 users have registered on our site, which gives them unlimited access to the artifacts. To date, 14 papers have been published by authors who have utilized our artifacts. We have also made fundamental research contributions in test suite repair and in reducing test case flakiness, and have begun to study these issues in the mobile domain. We have published many of these papers in top software engineering venues.
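For readers unfamiliar with the coverage/fault matrices mentioned above, a fault matrix records which test cases detect which seeded faults in a benchmark program. The following is a hypothetical sketch (the actual artifact formats on COMET may differ) showing how such a matrix can be used to compare the fault-detection effectiveness of two test suites.

# Hypothetical fault matrix: rows are test cases, columns are seeded faults, and an entry
# is True when that test detects that fault. Illustration only; not a COMET file format.
fault_matrix = {
    "t1": {"f1": True,  "f2": False, "f3": False},
    "t2": {"f1": False, "f2": True,  "f3": False},
    "t3": {"f1": True,  "f2": False, "f3": True},
}

def faults_detected(suite, matrix):
    """Return the set of faults detected by at least one test in the suite."""
    return {fault
            for test in suite
            for fault, detected in matrix[test].items()
            if detected}

if __name__ == "__main__":
    suite_a = ["t1", "t2"]
    suite_b = ["t3"]
    print("suite A detects:", sorted(faults_detected(suite_a, fault_matrix)))  # ['f1', 'f2']
    print("suite B detects:", sorted(faults_detected(suite_b, fault_matrix)))  # ['f1', 'f3']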
Broader Impacts: During the course of this grant we have involved (and published papers with) both graduate and undergraduate students on this research, some of whom are from underrepresented groups. The research has led to multiple MS/PhD theses, and several of the students have gone on to graduate studies at other universities. Students working on this project have also had the opportunity to attend conferences and present their research to the broader community. We have had industrial partners on some of this work, and many of the registered users on COMET are from industry. PIs Cohen and Memon have given several invited talks at conferences and at companies, and presented a tutorial on controlling test flakiness at ICSE 2013. Finally, both PIs have incorporated material from COMET into their courses at their respective universities.
Last Modified: 11/14/2016
Modified by: Myra B Cohen