Award Abstract # 1409872
SHF: Medium: Automated Graphical User Interface Testing with Learning

NSF Org: CCF
Division of Computing and Communication Foundations
Recipient: REGENTS OF THE UNIVERSITY OF CALIFORNIA, THE
Initial Amendment Date: July 29, 2014
Latest Amendment Date: May 15, 2018
Award Number: 1409872
Award Instrument: Standard Grant
Program Manager: Sol Greenspan
  sgreensp@nsf.gov
  (703) 292-7841
  CCF / Division of Computing and Communication Foundations
  CSE / Directorate for Computer and Information Science and Engineering
Start Date: August 1, 2014
End Date: July 31, 2020 (Estimated)
Total Intended Award Amount: $850,000.00
Total Awarded Amount to Date: $850,000.00
Funds Obligated to Date: FY 2014 = $850,000.00
History of Investigator:
  • Koushik Sen (Principal Investigator)
    ksen@eecs.berkeley.edu
  • George Necula (Former Co-Principal Investigator)
Recipient Sponsored Research Office: University of California-Berkeley
1608 4TH ST STE 201
BERKELEY
CA  US  94710-1749
(510)643-3891
Sponsor Congressional District: 12
Primary Place of Performance: University of California-Berkeley
581 Soda Hall
Berkeley
CA  US  94720-1776
Primary Place of Performance Congressional District: 12
Unique Entity Identifier (UEI): GS3YEVSS12N6
Parent UEI:
NSF Program(s): Software & Hardware Foundations
Primary Program Source: 01001415DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 7924, 7944
Program Element Code(s): 779800
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Smartphones and tablets with rich graphical user interfaces (GUIs) are
becoming increasingly popular. Hundreds of thousands of specialized
applications, called apps, are already available for these mobile
platforms, and the number of newly released apps continues to
grow. The complexity of these apps often lies in the user interface,
with data processing either minor or delegated to a backend
component. A similar situation exists in applications using the
software-as-a-service architecture, where the client-side component
consists mostly of user-interface code. Testing such applications is
therefore predominantly GUI testing. Existing automatic techniques for
testing these interfaces either require an a priori model of the
interface, which makes them hard to use, or send random user events to
the application blindly, which typically cannot exercise the
application in satisfactory depth.

This project investigates automatic GUI testing techniques that
systematically explore the state space of an application without
requiring an a priori defined model. One insight behind this project
is that automatically constructing a model of the user interface and
testing the interface are tasks that can cooperate in a mutually
beneficial way. Furthermore, a guiding principle throughout this
research is to design algorithms that operate with abstractions and
heuristics simple enough to be understood by humans who do not
necessarily know the internals of the tested app. Such algorithms are
easier to comprehend and to incorporate into a holistic test process
that combines automated techniques, such as the ones developed in this
project, with manual testing and guidance. The techniques developed in
this project directly benefit the programmers of these apps and
indirectly benefit the numerous users of mobile and web applications.
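
The "mutually beneficial" interplay between model construction and testing can be made concrete with a short sketch. The TypeScript below is a minimal illustration under assumed interfaces, not the project's actual algorithm: a learned transition model steers exploration toward events whose outcome is still unknown, and every observed transition refines the model. The App interface, the StateId abstraction, and all other names are hypothetical.

// Minimal sketch (hypothetical interfaces, not the project's actual
// algorithm): a learned transition model guides GUI exploration, and
// each observed transition refines the model in turn.

type StateId = string;   // abstract fingerprint of a GUI screen
type UiEvent = string;   // a user event such as "tap:loginButton"

interface App {
  currentState(): StateId;     // abstraction of the visible GUI
  enabledEvents(): UiEvent[];  // events the current screen accepts
  fire(e: UiEvent): void;      // send one user event to the app
  restart(): void;             // relaunch the app from its initial state
}

class ModelGuidedExplorer {
  // Learned model: state -> event -> observed successor state.
  private model = new Map<StateId, Map<UiEvent, StateId>>();

  explore(app: App, budget: number): void {
    app.restart();
    for (let i = 0; i < budget; i++) {
      const s = app.currentState();
      const known = this.model.get(s) ?? new Map<UiEvent, StateId>();
      this.model.set(s, known);

      // Testing guided by the model: prefer events whose outcome the
      // model cannot yet predict in the current state.
      const untried = app.enabledEvents().filter(e => !known.has(e));
      if (untried.length === 0) {
        app.restart(); // everything known here; start a fresh run
        continue;
      }

      const e = untried[0];
      app.fire(e);
      known.set(e, app.currentState()); // the model learns from testing
    }
  }
}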

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


(Showing: 1 - 10 of 15)
Caroline Lemieux and Koushik Sen. "FairFuzz: A Targeted Mutation Strategy for Increasing Greybox Fuzz Testing Coverage." 33rd IEEE/ACM International Conference on Automated Software Engineering, 2018.
Caroline Lemieux, Rohan Padhye, Koushik Sen, and Dawn Song. "PerfFuzz: Automatically Generating Pathological Inputs." ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA'18), 2018.
Kevin Laeufer, Jack Koenig, Donggyu Kim, Jonathan Bachrach, and Koushik Sen. "RFUZZ: Coverage-Directed Fuzz Testing of RTL on FPGAs." International Conference on Computer Aided Design (ICCAD 2018), 2018.
Michael Pradel and Koushik Sen. "DeepBugs: A Learning Approach to Name-based Bug Detection." ACM SIGPLAN Conference on Systems, Programming, Languages and Applications: Software for Humanity (OOPSLA 2018), 2018.
Rafael Dutra, Jonathan Bachrach, and Koushik Sen. "GuidedSampler: Coverage-guided Sampling of SMT Solutions." Formal Methods in Computer Aided Design (FMCAD 2019), 2019.
Rafael Dutra, Jonathan Bachrach, and Koushik Sen. "SMTSampler: Efficient Stimulus Generation from Complex SMT Constraints." International Conference on Computer Aided Design (ICCAD 2018), 2018.
Rafael Dutra, Kevin Laeufer, Jonathan Bachrach, and Koushik Sen. "Efficient Sampling of SAT Solutions for Testing." 40th International Conference on Software Engineering (ICSE'18), 2018.
Rohan Padhye, Caroline Lemieux, and Koushik Sen. "JQF: Coverage-guided Property-based Testing in Java." Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2019), 2019.
Rohan Padhye, Caroline Lemieux, Koushik Sen, Mike Papadakis, and Yves Le Traon. "Semantic Fuzzing with Zest." Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2019), 2019.
Rohan Padhye, Caroline Lemieux, Koushik Sen, Mike Papadakis, and Yves Le Traon. "Validity Fuzzing and Parametric Generators for Effective Random Testing (Poster paper)." Proceedings of the 41st International Conference on Software Engineering: Companion Proceedings (ICSE 2019), 2019.
Rohan Padhye and Koushik Sen. "TRAVIOLI: A Dynamic Analysis for Detecting Data-Structure Traversals." 39th International Conference on Software Engineering (ICSE'17), 2017.

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

Graphical user interfaces (GUIs) are increasingly popular, with significant growth in mobile and web applications. This project developed automatic GUI testing techniques that systematically explore an application's state space without requiring an a priori defined model.

In this project, we developed SwiftHand and EventBreak, automated testing techniques for the GUIs of Android apps that use machine learning and program analysis. We also developed DETREDUCE, a technique that significantly reduces the size of the test suites generated by such automated testing tools. Earlier, we created Jalangi, a customizable dynamic analysis framework for JavaScript, the language widely used to make website GUIs interactive. More recently we have built several dynamic analyses on top of Jalangi (a small illustration follows the list):

  • Trace typing, a technique for automatically and quantitatively evaluating variations of a retrofitted type system on JavaScript programs.
  • Travioli, a technique for detecting and visualizing data-structure traversals, for manually generating performance regression tests, and for discovering performance bugs caused by redundant traversals.
  • EventRaceCommander, a technique for automated repair of event race errors in JavaScript web applications.
  • A platform-independent dynamic taint analysis technique for JavaScript. 
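
As a small illustration of the last item, the TypeScript sketch below shows the core idea behind dynamic taint analysis: a shadow "taint" bit is attached to values at untrusted sources and propagated through operations until it reaches a sensitive sink. This is a toy model, not Jalangi's API; all names are hypothetical, and for simplicity it tracks only object values.

// Toy model of dynamic taint analysis (hypothetical names, not
// Jalangi's API): shadow state marks values from untrusted sources and
// propagates the mark through operations to sensitive sinks.

class TaintTracker {
  // Shadow state: maps a value to its taint bit. A WeakMap keeps the
  // sketch simple but limits tracking to object values.
  private taint = new WeakMap<object, boolean>();

  markSource(v: object): void {
    this.taint.set(v, true); // e.g., data read from a URL parameter
  }

  isTainted(v: object): boolean {
    return this.taint.get(v) === true;
  }

  // Called by instrumentation after each binary operation: the result
  // is tainted if either operand was.
  onBinary(left: object, right: object, result: object): void {
    if (this.isTainted(left) || this.isTainted(right)) {
      this.taint.set(result, true);
    }
  }

  // Called by instrumentation before a sensitive sink such as eval or
  // a DOM write; a tainted argument indicates a potential flow.
  onSink(v: object, sinkName: string): void {
    if (this.isTainted(v)) {
      console.warn(`tainted value reaches sink: ${sinkName}`);
    }
  }
}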

While working on GUI testing, we realized that we could apply feedback-directed fuzzing and machine learning to achieve better coverage (a simplified sketch of this feedback loop appears below). We developed ground-breaking automated test generation techniques that find deep correctness and security bugs, as well as pathological performance and resource-usage bugs, in real-world software. Such bugs were beyond the reach of existing automated testing techniques. A key insight behind this work is that if one uses machine learning and solicits high-level insights and guidance from developers, the effectiveness and efficiency of automated testing can be dramatically improved. Our research contributions have made fuzz testing smarter and dramatically more effective for real-world software. Our papers have won ACM SIGSOFT Distinguished Paper, ACM SIGSOFT Distinguished Artifact, and Best Tool Demo awards. Our testing tools have been adopted by large tech firms (e.g., Netflix and Samsung) and commercialized by security-oriented startups (e.g., FuzzIt and Pentagrid AG).
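
The sketch below condenses the coverage-guided (greybox) fuzzing loop that this line of work builds on. It is a deliberately simplified illustration, not the algorithm of FairFuzz, Zest, or any other specific tool from the project; the Target interface and all other names are assumptions made for the example.

// Simplified coverage-guided fuzzing loop (illustrative only): mutate
// inputs from a corpus and keep any mutant that covers a new branch.

type Input = Uint8Array;

interface Target {
  // Runs the program under test and reports the covered branch ids.
  run(input: Input): Set<number>;
}

function mutate(input: Input): Input {
  const out = new Uint8Array(input);  // copy, then randomize one byte
  const i = Math.floor(Math.random() * out.length);
  out[i] = Math.floor(Math.random() * 256);
  return out;
}

function fuzz(target: Target, seeds: Input[], iterations: number): Input[] {
  const corpus = [...seeds];          // assumes at least one seed input
  const globalCoverage = new Set<number>();

  for (let i = 0; i < iterations; i++) {
    const parent = corpus[Math.floor(Math.random() * corpus.length)];
    const child = mutate(parent);
    const covered = target.run(child);

    // Feedback direction: a mutant that reaches a new branch is kept,
    // so later mutations start from inputs that go deeper.
    let foundNew = false;
    for (const b of covered) {
      if (!globalCoverage.has(b)) {
        globalCoverage.add(b);
        foundNew = true;
      }
    }
    if (foundNew) corpus.push(child);
  }
  return corpus;
}

Real tools refine this loop rather than replace it: FairFuzz biases mutation toward rarely hit branches, and Zest mutates the parameters of structured input generators, but the keep-what-covers-more feedback is the same.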

Last Modified: 11/02/2020
Modified by: Koushik Sen
