Award Abstract # 1320783
SHF: Small: BugX: In-house Debugging of Field Failures to Improve Software Quality

NSF Org: CCF (Division of Computing and Communication Foundations)
Recipient: GEORGIA TECH RESEARCH CORP
Initial Amendment Date: June 26, 2013
Latest Amendment Date: June 26, 2013
Award Number: 1320783
Award Instrument: Standard Grant
Program Manager: Sol Greenspan
sgreensp@nsf.gov
(703) 292-7841
CCF (Division of Computing and Communication Foundations)
CSE (Directorate for Computer and Information Science and Engineering)
Start Date: September 1, 2013
End Date: August 31, 2017 (Estimated)
Total Intended Award Amount: $434,999.00
Total Awarded Amount to Date: $434,999.00
Funds Obligated to Date: FY 2013 = $434,999.00
History of Investigator:
  • Alessandro Orso (Principal Investigator)
    orso@cc.gatech.edu
Recipient Sponsored Research Office: Georgia Tech Research Corporation
926 DALNEY ST NW
ATLANTA
GA  US  30318-6395
(404)894-4819
Sponsor Congressional District: 05
Primary Place of Performance: Georgia Institute of Technology
225 North Ave NW
Atlanta
GA  US  30332-0002
Primary Place of Performance Congressional District: 05
Unique Entity Identifier (UEI): EMW9FC8J3HN4
Parent UEI: EMW9FC8J3HN4
NSF Program(s): SOFTWARE ENG & FORMAL METHODS
Primary Program Source: 01001314DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 7923, 7944
Program Element Code(s): 794400
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

A recent survey conducted among developers of the Apache, Eclipse, and
Mozilla projects showed that the ability to recreate field
failures--failures of the software that occur after deployment, on
user machines--is considered of fundamental importance when
investigating bug reports. Unfortunately, the information typically
contained in a bug report, such as memory dumps or call stacks, is
usually insufficient for recreating the problem. Even more advanced
approaches for gathering field data to help in-house debugging tend
to provide too little information to developers and are therefore
ineffective.

The overall goal of this project is to improve the state of the art by
allowing, supporting, and partially automating actual in-house
debugging of field failures. Specifically, this research will develop
novel techniques and tools that let developers reproduce, analyze, and
understand, in-house, failures observed in the field. Given a field
failure, the developed techniques will (1) collect a suitable set of
data about the failure on the user machine, (2) generate one or more
inputs that can be executed against the failing application and result
in a failure analogous to the one observed, and (3) provide hints on
the root causes of the failure and possible fixes for these causes. To
achieve this goal, the research will combine static and dynamic
program analysis techniques and leverage and extend techniques for
testing deployed software, input generation and anonymization, and
software debugging. If successful, this research will provide
unprecedented advantages to developers by allowing them to debug field
failures in the same way in which they debug in-house ones, which will
improve software quality and benefit all segments of society that
depend on software. Furthermore, the project will develop and make
available to the broader scientific community educational materials
that incorporate research findings, tools that implement the
techniques developed within the project, and samples of the software
benchmarks used in empirical evaluations. The availability of
curriculum materials, tools, infrastructure, and benchmarks will
advance knowledge, enable additional research in the area, and
ultimately further benefit society.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Daniele Zuddas, Wei Jin, Fabrizio Pastore, Leonardo Mariani, and Alessandro Orso. "MIMIC: Locating and Understanding Bugs by Analyzing Mimicked Executions." Proceedings of the 29th IEEE/ACM International Conference on Automated Software Engineering (ASE 2014), 2014, p. 815. doi:10.1145/2642937.2643014
Qianqian Wang, Chris Parnin, and Alessandro Orso. "Evaluating the Usefulness of IR-Based Fault Localization Techniques." Proceedings of the ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2015), 2015, p. 1. doi:10.1145/2771783.2771797
Shauvik Roy Choudhary, Alessandra Gorla, and Alessandro Orso. "Automated Test Input Generation for Android: Are We There Yet?" 30th IEEE/ACM International Conference on Automated Software Engineering (ASE 2015), 2015, p. 429. doi:10.1109/ASE.2015.89
Wei Jin and Alessandro Orso. "Automated Support for Reproducing and Debugging Field Failures." ACM Transactions on Software Engineering and Methodology (TOSEM), vol. 24, 2015, p. 1. doi:10.1145/2774218
Wei Jin and Alessandro Orso. "Improving Efficiency and Accuracy of Formula-based Debugging." 12th Haifa Verification Conference (HVC 2016), 2016, p. 99. doi:10.1007/978-3-319-49052-6_7
Xiangyu Li, Marcelo d'Amorim, and Alessandro Orso. "Iterative User-Driven Fault Localization." 12th Haifa Verification Conference (HVC 2016), 2016, p. 82. doi:10.1007/978-3-319-49052-6_6

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

Software debugging is a human-intensive activity responsible for much of the cost of software development and maintenance. Existing approaches for automated debugging can help lower this cost but have limitations that hinder their effectiveness and applicability. This project aimed to overcome these limitations by developing a family of debugging techniques that (1) target realistic debugging scenarios, in which faults can involve multiple statements and manifest themselves only in specific contexts, (2) apply advanced static and dynamic analysis techniques to automatically reduce the number of statements and inputs that developers must examine when investigating a failure, and (3) leverage information collected from the field to increase the relevance and effectiveness of the debugging process. To accomplish the goal of the project, we developed several new testing and debugging techniques, which are summarized in the rest of this report.

Techniques for reproducing, debugging, and repairing software failures that occur on user machines, after the software has been deployed. A survey conducted among developers of the Apache, Eclipse, and Mozilla projects revealed that most developers consider information on how to reproduce failures to be both the most valuable and the most difficult to obtain piece of information in a bug report. To address this problem, we first developed techniques that, given an observed field failure, can provide developers with test inputs that result in a failure analogous to the one the user experienced. We then extended these techniques with support for automated debugging and bug understanding, so as to further help developers identify the root causes of field failures. Finally, we went beyond locating and understanding failure causes by developing automated repair techniques that can provide developers with hints on how a given bug may be fixed.
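
The following minimal sketch, written in Python with entirely hypothetical names, only illustrates the overall reproduction workflow: a failure signature collected on the user machine is compared against the failures triggered by candidate inputs generated in-house. The project's actual techniques rely on static and dynamic analysis rather than the random search shown here.

    import random
    import string
    import traceback

    def crash_signature(exc):
        """Reduce an exception to a comparable signature: type plus call chain."""
        frames = traceback.extract_tb(exc.__traceback__)
        return (type(exc).__name__, tuple(f.name for f in frames))

    def reproduce(program, field_signature, attempts=10000):
        """Search for an input whose in-house failure matches the field signature."""
        for _ in range(attempts):
            candidate = "".join(random.choices(string.printable, k=random.randint(1, 20)))
            try:
                program(candidate)
            except Exception as exc:
                if crash_signature(exc) == field_signature:
                    return candidate  # analogous in-house failure found
        return None  # unable to reproduce within the attempt budget

Given an input returned by a procedure like reproduce, the later stages described above (fault localization and repair hints) can operate on an in-house execution rather than on the limited data contained in the original bug report.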

Techniques for helping developers debug more efficiently and effectively. Countless approaches have been proposed over the years to help developers decrease the cost and complexity of software debugging. Although some of these techniques have been shown to be useful, they tend to make strong, often unrealistic assumptions about how developers behave when debugging. In the context of this project, we addressed this issue by first studying the limitations of existing approaches and then developing techniques that follow the way in which debugging is typically performed, while automating significant parts of it. In particular, given a failure, our techniques use static and dynamic analyses to (1) formulate hypotheses on the possible causes of such a failure and (2) generate intuitive queries for the developers based on these hypotheses. The answers to these queries can then help refute or refine these hypotheses until the cause of the failure being investigated is found.
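
As an illustration of this query-driven loop, the sketch below (Python, with hypothetical names and scoring; not the project's actual algorithm) ranks candidate fault locations by a suspiciousness score, asks the developer whether the value observed at the most suspicious location looks correct, and uses each answer to refute or refine the current hypotheses.

    def debug_interactively(hypotheses, ask_developer):
        """hypotheses: dict mapping program location -> suspiciousness score.
        ask_developer: callback returning True if the value at a location looks correct."""
        while hypotheses:
            location = max(hypotheses, key=hypotheses.get)  # most suspicious hypothesis first
            if ask_developer(location):
                del hypotheses[location]  # value is correct: hypothesis refuted, move on
            else:
                return location  # incorrect value observed: likely root cause
        return None  # every hypothesis was refuted; further analysis needed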

Techniques for supporting testing and debugging of mobile apps. In addition to developing techniques that can improve software debugging in general, in this project we also defined techniques that specifically target testing and debugging of mobile (Android) apps. Specifically, we developed techniques that allow for recording, encoding, and running platform-independent test cases, so as to facilitate regression testing and debugging. We also developed techniques based on differential analysis that can automatically identify inconsistencies in the way an app behaves on different devices and report these inconsistencies and their causes to developers.
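
A minimal sketch of the differential-analysis idea follows (Python; the trace format and names are illustrative assumptions, not the project's implementation): the same platform-independent test case is replayed on two devices, each replay yields a sequence of (event, observed state) pairs, and the two traces are compared step by step to report where behavior diverges.

    from itertools import zip_longest

    def diff_traces(trace_a, trace_b):
        """Each trace is a list of (event, state) pairs recorded while replaying a test."""
        inconsistencies = []
        for step, (a, b) in enumerate(zip_longest(trace_a, trace_b)):
            if a != b:  # behavior diverges at this step (or one trace ended early)
                inconsistencies.append({"step": step, "device_a": a, "device_b": b})
        return inconsistencies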

Broader impact of this research: In addition to disseminating the results of this research through publications, public presentations, and integration into the curriculum, we made the tools, data, and experimental infrastructure developed within the project freely available to researchers and practitioners, which will help further dissemination and enable future research. More generally, by advancing the state of the art in the areas of software testing and debugging, this research helped and will continue to help developers build more reliable software systems, ultimately increasing the overall quality of our software infrastructure and benefiting all segments of society that depend on software.

Last Modified: 05/10/2018
Modified by: Alessandro Orso
