Award Abstract # 1434596
CAREER: Cooperative Developer Testing with Test Intentions

NSF Org: CCF (Division of Computing and Communication Foundations)
Recipient: UNIVERSITY OF ILLINOIS
Initial Amendment Date: March 4, 2014
Latest Amendment Date: March 4, 2014
Award Number: 1434596
Award Instrument: Continuing Grant
Program Manager: Sol Greenspan
sgreensp@nsf.gov
(703) 292-7841
CCF (Division of Computing and Communication Foundations)
CSE (Directorate for Computer and Information Science and Engineering)
Start Date: July 1, 2013
End Date: July 31, 2016 (Estimated)
Total Intended Award Amount: $364,274.00
Total Awarded Amount to Date: $364,274.00
Funds Obligated to Date: FY 2010 = $93,274.00
FY 2011 = $101,000.00
FY 2012 = $85,000.00
FY 2013 = $85,000.00
History of Investigator:
  • Tao Xie (Principal Investigator)
    taoxie@illinois.edu
Recipient Sponsored Research Office: University of Illinois at Urbana-Champaign
506 S WRIGHT ST
URBANA
IL  US  61801-3620
(217)333-2187
Sponsor Congressional District: 13
Primary Place of Performance: University of Illinois at Urbana-Champaign
SUITE A 1901 SOUTH FIRST ST.
CHAMPAIGN
IL  US  61820-7473
Primary Place of Performance Congressional District: 13
Unique Entity Identifier (UEI): Y8CWNJRCNN91
Parent UEI: V2PHZ2CSCH63
NSF Program(s): Software & Hardware Foundation,
SOFTWARE ENG & FORMAL METHODS,
Computing in the Cloud
Primary Program Source: 01001011DB NSF RESEARCH & RELATED ACTIVIT
01001112DB NSF RESEARCH & RELATED ACTIVIT
01001213DB NSF RESEARCH & RELATED ACTIVIT
01001314DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 1045, 9218, HPCC
Program Element Code(s): 779800, 794400, 801000
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).

Developer testing has been widely recognized as an important and valuable means of improving software reliability, partly due to its capability of exposing faults early in the software development life cycle. However, manual developer testing is often tedious and insufficient. Testing tools can reduce manual effort and make more economical use of resources. To maximize the value of developer testing, effective and efficient support for cooperation between developers and tools is greatly needed, yet lacking in state-of-the-art research and practice.

This research aims to create a systematic framework for cooperative developer testing that provides practical techniques and tools, with an integrated research and education plan. In particular, the research addresses fundamental questions around the specification of test intentions by developers to communicate their testing goals or guidance to tools, the satisfaction of test intentions by tools, and the explanation of intention satisfaction by tools. Test-intention satisfaction and its explanation assist developers not only in their testing tasks but also in their debugging tasks. The framework also helps infer likely test intentions, reducing the manual effort of specifying them. The broader impacts of the project include improved software reliability and collaboration with industry to transfer technology.
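As a concrete illustration of what a test intention could look like (an illustration only, not taken from the award): a developer can write a parameterized test method whose assumptions state when the intention applies and whose assertions state what must then hold, leaving input generation to a tool. The minimal, framework-free Java sketch below invents a BoundedStack class for this purpose.

import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical class, defined only to make the sketch self-contained.
class BoundedStack {
    private final Deque<Integer> items = new ArrayDeque<>();
    private final int capacity;

    BoundedStack(int capacity) { this.capacity = capacity; }

    boolean isFull() { return items.size() >= capacity; }
    int size() { return items.size(); }

    void push(int v) {
        if (isFull()) throw new IllegalStateException("stack is full");
        items.push(v);
    }

    int pop() { return items.pop(); }
}

class TestIntentions {
    // A test intention: for any non-full stack and any value, push followed
    // by pop must return that value and leave the size unchanged. Under the
    // framework's division of labor, a cooperating tool would generate
    // (s, value) pairs that drive this method (intention satisfaction) and
    // report which intentions it could not cover (explanation).
    static void pushPopIntention(BoundedStack s, int value) {
        // Assumption: guidance to the tool about which inputs are relevant.
        if (s == null || s.isFull()) {
            return; // inputs outside the intention are simply discarded
        }
        int sizeBefore = s.size();
        s.push(value);
        // Assertions: the testing goal the developer communicates to the tool.
        assert s.pop() == value : "pop must return the pushed value";
        assert s.size() == sizeBefore : "push/pop must preserve the size";
    }
}

Run with assertions enabled (java -ea) so that a violated intention surfaces as an AssertionError; keeping assumptions separate from assertions is what lets a tool distinguish irrelevant inputs from genuine failures.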

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo period.

Benjamin Andow, Adwait Nadkarni, Blake Bassett, William Enck, and Tao Xie, "A Study of Grayware on Google Play," Proceedings of the Workshop on Mobile Security Technologies (MoST 2016), 2016. DOI: 10.1109/SPW.2016.40
Dan Hao, Lu Zhang, Lei Zang, Yanbo Wang, Xingxia Wu, and Tao Xie, "To Be Optimal or Not in Test-Case Prioritization," IEEE Transactions on Software Engineering, v.42, 2016, p.490. DOI: 10.1109/TSE.2015.2496939
Huoran Li, Xuanzhe Liu, Tao Xie, Kaigui Bian, Xuan Lu, Felix Xiaozhu Lin, Qiaozhu Mei, and Feng Feng, "Characterizing Smartphone Usage Patterns from Millions of Android Users," Proceedings of the 2015 Internet Measurement Conference (IMC 2015), 2015, p.459. DOI: 10.1145/2815675.2815686
Kai Pan, Xintao Wu, and Tao Xie, "Program-Input Generation for Testing Database Applications Using Existing Database States," Automated Software Engineering Journal, v.22, 2015, p.439. DOI: 10.1007/s10515-014-0158-y
Sihan Li, Xusheng Xiao, Blake Bassett, Tao Xie, and Nikolai Tillmann, "Measuring Code Behavioral Similarity for Programming and Software Engineering Education," Proceedings of the 38th International Conference on Software Engineering (ICSE 2016), SEET, 2016, p.501. DOI: 10.1145/2889160.2889204
Tao Xie, Lu Zhang, Xusheng Xiao, Yingfei Xiong, and Dan Hao, "Cooperative Software Testing and Analysis: Advances and Challenges," Journal of Computer Science and Technology, v.29, 2014, p.713. DOI: 10.1007/s11390-014-1461-6
Xuan Lu, Xuanzhe Liu, Huoran Li, Tao Xie, Qiaozhu Mei, Dan Hao, Gang Huang, and Feng Feng, "Mining Usage Data from Large-Scale Android Users: Challenges and Opportunities," Proceedings of the IEEE/ACM International Conference on Mobile Software Engineering and Systems (MOBILESoft 2016), Mobile Applications, 2016, p.301. DOI: 10.1145/2897073.2897721
Xuan Lu, Xuanzhe Liu, Huoran Li, Tao Xie, Qiaozhu Mei, Dan Hao, Gang Huang, and Feng Feng, "PRADA: Prioritizing Android Devices for Apps by Mining Large-Scale Usage Data," Proceedings of the 38th International Conference on Software Engineering (ICSE 2016), 2016, p.3. DOI: 10.1145/2884781.2884828
Yuan Yao, Hanghang Tong, Tao Xie, Leman Akoglu, Feng Xu, and Jian Lu, "Detecting High-quality Posts in Community Question Answering Sites," Information Sciences, v.302, 2015, p.70. DOI: 10.1016/j.ins.2014.12.038

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

The main goal of the project was to develop a systematic framework for cooperative developer testing that provides practical techniques and tools, with an integrated research and education plan. The project explored synergistic cooperation between developers and testing tools to achieve higher software reliability at lower cost.

The outcomes of this project were a set of new techniques and tools for cooperative developer testing. The project advanced understanding of fundamental issues in cooperation between developers and testing tools, and explored new approaches that reduce developers’ cooperation cost by improving tools’ automation capability and by reducing the false warnings that tools raise when seeking cooperation from developers.

More specifically, we have developed:
  • techniques and tools that precisely identify and report the problems preventing test-generation tools from achieving high structural coverage, reducing developers’ guidance effort;
  • techniques and tools that generate proper method sequences to construct desired objects as method parameters in object-oriented unit test generation;
  • a methodology that retrofits existing conventional unit tests into parameterized unit tests to improve fault-detection capability and code coverage (a sketch of this retrofitting appears after this list);
  • techniques and tools that generate various cloud states for effective testing of cloud applications;
  • techniques and tools that predict which loops become workload-dependent performance bottlenecks under large workloads, reducing developers’ inspection effort in performance testing;
  • characteristic studies that address loop problems in dynamic symbolic execution to improve tools’ testing effectiveness.
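To make the retrofitting item above concrete, here is a minimal sketch of a conventional unit test next to its parameterized counterpart. It is illustrative only: it uses Java with JUnit 5 rather than the C#/Pex setting of the project’s actual tools, and it reuses the hypothetical BoundedStack class from the sketch in the abstract section.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

class BoundedStackRetrofitTest {

    // Before: a conventional unit test. The intended behavior is entangled
    // with one hand-picked input (42).
    @Test
    void pushThenPopReturnsValue() {
        BoundedStack s = new BoundedStack(10);
        s.push(42);
        assertEquals(42, s.pop());
    }

    // After: the retrofitted, parameterized version. The behavior is stated
    // for any value; the fixed @ValueSource inputs merely stand in for values
    // that an automated test-generation tool could supply.
    @ParameterizedTest
    @ValueSource(ints = {0, -1, 7, Integer.MAX_VALUE})
    void pushThenPopReturnsAnyValue(int value) {
        BoundedStack s = new BoundedStack(10);
        s.push(value);
        assertEquals(value, s.pop());
    }
}

In the Pex/IntelliTest setting the parameter would instead be treated symbolically, with concrete values derived by dynamic symbolic execution; the enumerated @ValueSource list here is only a placeholder for that generation step.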

We have collaborated with Microsoft Research on improving an automatic test generation tool called Pex, which shipped as IntelliTest in Microsoft Visual Studio 2015 Enterprise Edition. We have also collaborated with Microsoft Research on Code Hunt, a serious gaming platform for coding contests and practicing programming skills; between its 2014 launch and August 2016, Code Hunt was used by over 350,000 players.

We have disseminated our research results through publications in highly competitive conferences and journals, public releases of tools and evaluation artifacts, and research exchanges. We have trained next-generation researchers through student mentoring, and next-generation software engineers through undergraduate- and graduate-level education.

Last Modified: 01/29/2017
Modified by: Tao Xie

Please report errors in award information by writing to: awardsearch@nsf.gov.
