
NSF Org: CCF Division of Computing and Communication Foundations
Recipient:
Initial Amendment Date: July 29, 2014
Latest Amendment Date: May 15, 2018
Award Number: 1409872
Award Instrument: Standard Grant
Program Manager: Sol Greenspan, sgreensp@nsf.gov, (703)292-7841, CCF Division of Computing and Communication Foundations, CSE Directorate for Computer and Information Science and Engineering
Start Date: August 1, 2014
End Date: July 31, 2020 (Estimated)
Total Intended Award Amount: $850,000.00
Total Awarded Amount to Date: $850,000.00
Funds Obligated to Date:
History of Investigator:
Recipient Sponsored Research Office: 1608 4TH ST STE 201, BERKELEY, CA, US 94710-1749, (510)643-3891
Sponsor Congressional District:
Primary Place of Performance: 581 Soda Hall, Berkeley, CA, US 94720-1776
Primary Place of Performance Congressional District:
Unique Entity Identifier (UEI):
Parent UEI:
NSF Program(s): Software & Hardware Foundation
Primary Program Source:
Program Reference Code(s):
Program Element Code(s):
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
Smartphones and tablets with rich graphical user interfaces (GUIs) are
becoming increasingly popular. Hundreds of thousands of specialized
applications, called apps, are already available for these mobile
platforms, and the number of newly released apps continues to
increase. The complexity of these apps often lies in the user
interface; data processing is either minor or delegated to a
backend component. A similar situation exists in applications using
the software-as-a-service architecture, where the client-side
component consists mostly of user interface code. Testing such
applications predominantly involves GUI testing. Existing automatic
techniques for testing these interfaces either require a priori models
of the interface and are thus hard to use, or operate blindly by
sending random user events to the application and are typically unable
to test the application in satisfactory depth.
This project investigates automatic GUI testing techniques that
systematically explore the state space of an application without
requiring an a priori defined model. One insight behind this project
is that the automatic construction of a model of the user interface
and the testing of the interface are tasks that can cooperate in a
mutually beneficial way. Furthermore, a guiding principle throughout
this research is to design algorithms that operate with abstractions
and heuristics that are simple enough to be understood by humans who
do not necessarily understand the internals of the tested app. Such
algorithms are easier to comprehend and to incorporate into a
holistic test process that combines automated techniques, such as the
ones developed in this project, with manual testing and guidance. The
techniques developed in this project directly benefit the programmers
of these apps, and indirectly the numerous users of mobile and web
applications.
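The cooperation between model construction and testing described above can be pictured as a simple exploration loop: the tester maintains a learned model of the GUI, prefers actions not yet tried in the current abstract state, executes them on the app, and refines the model with each observed transition. The sketch below is a hypothetical illustration of this idea; the `ToyApp` class and the screen-name abstraction are invented for the example and are not the project's actual implementation, which drives real Android apps.

```python
import random
from collections import defaultdict

class ToyApp:
    """A stand-in for a real GUI app: three screens linked by buttons.
    (Hypothetical; a real tester would drive an instrumented Android app.)"""
    SCREENS = {
        "home":     {"open_settings": "settings", "open_list": "list"},
        "settings": {"back": "home"},
        "list":     {"back": "home", "open_settings": "settings"},
    }
    def __init__(self):
        self.screen = "home"
    def enabled_actions(self):
        return sorted(self.SCREENS[self.screen])
    def execute(self, action):
        self.screen = self.SCREENS[self.screen][action]

def explore(app, max_steps=50):
    """Learn a GUI model while testing: always prefer an action
    not yet observed in the current abstract state."""
    model = defaultdict(dict)              # state -> {action: next_state}
    for _ in range(max_steps):
        state = app.screen                 # abstraction: the screen name
        untried = [a for a in app.enabled_actions() if a not in model[state]]
        action = untried[0] if untried else random.choice(app.enabled_actions())
        app.execute(action)
        model[state][action] = app.screen  # refine the learned model
    return dict(model)

model = explore(ToyApp())
```

The key point of the sketch is that the partially learned model steers the tester toward unexplored behavior, while each test step simultaneously makes the model more complete.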
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
Graphical user interfaces (GUIs) are increasingly popular, with significant growth in mobile and web applications. This project developed automatic GUI testing techniques that systematically explore an application's state space without requiring an a priori defined model.
In this project, we developed SwiftHand and EventBreak, automated testing techniques for graphical user interfaces (GUIs) of Android apps that use machine learning and program analysis. We also developed a technique called DETREDUCE that significantly reduces the size of test suites generated by such automated testing techniques. Previously, I had created Jalangi, a customizable dynamic analysis framework for JavaScript programs; JavaScript is widely used to make website GUIs interactive. More recently, we have built several dynamic analyses on top of Jalangi:
- Trace typing, a technique for automatically and quantitatively evaluating variations of a retrofitted type system on JavaScript programs.
- Travioli, a technique for detecting and visualizing data-structure traversals, for manually generating performance regression tests, and for discovering performance bugs caused by redundant traversals.
- EventRaceCommander, a technique for automated repair of event race errors in JavaScript web applications.
- A platform-independent dynamic taint analysis technique for JavaScript.
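To give a flavor of what a dynamic taint analysis like the one above does, the sketch below propagates a taint mark through string operations on wrapped values and rejects tainted data at a security-sensitive sink. The `Tainted` wrapper and `sink` function are illustrative inventions for this example, not Jalangi's API; Jalangi instead instruments JavaScript source code to observe operations.

```python
class Tainted:
    """Wrap a value with a taint flag; operations propagate the flag.
    (Illustrative only; a real analysis instruments the program itself.)"""
    def __init__(self, value, tainted=True):
        self.value, self.tainted = value, tainted

    def __add__(self, other):
        if isinstance(other, Tainted):
            return Tainted(self.value + other.value, self.tainted or other.tainted)
        return Tainted(self.value + other, self.tainted)

    def __radd__(self, other):       # handles: plain_string + tainted_value
        return Tainted(other + self.value, self.tainted)

def sink(x):
    """A security-sensitive sink (e.g., a database query): reject tainted data."""
    if isinstance(x, Tainted):
        if x.tainted:
            raise ValueError("tainted value reached sink")
        return x.value
    return x

# Data from an untrusted source is marked tainted; the mark survives
# concatenation, so the sink can detect the injection attempt.
user_input = Tainted("alice'; DROP TABLE users;--")
query = "SELECT * FROM users WHERE name='" + user_input + "'"
```

Calling `sink(query)` raises, because the taint introduced at `user_input` flows through both concatenations into `query`.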
While working on GUI testing, we realized that we could apply feedback-directed fuzzing and machine learning to obtain better coverage. We developed ground-breaking automated test generation techniques that can find deep correctness and security bugs, as well as pathological performance and resource-usage bugs, in real-world software. Such bugs were beyond the reach of existing automated testing techniques. A key insight behind this work is that if one uses machine learning and solicits high-level insights and guidance from developers, the effectiveness and efficiency of automated testing can be dramatically improved. Our research contributions have made fuzz testing smarter and dramatically more effective for real-world software. Our papers have won awards, including ACM SIGSOFT Distinguished Paper, ACM SIGSOFT Distinguished Artifact, and Best Tool Demo awards. Our testing tools have been adopted by large tech firms (e.g., Netflix and Samsung) and commercialized by security-oriented startups (e.g., FuzzIt and Pentagrid AG).
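The core loop of feedback-directed (coverage-guided) fuzzing can be sketched in a few lines: mutate inputs drawn from a seed corpus, run the target, and keep any mutant that exercises behavior not seen before, so that later mutations build on it. The toy target program and its branch-set "coverage signal" below are hypothetical stand-ins; real fuzzers instrument compiled programs to collect coverage.

```python
import random

def target(data: bytes):
    """Toy program under test; returns the set of branch ids it takes.
    (Hypothetical target; a real fuzzer instruments real binaries.)"""
    branches = set()
    if len(data) > 3:
        branches.add("len>3")
        if data[0] == ord("F"):
            branches.add("F")
            if data[1] == ord("U"):
                branches.add("FU")
                if data[2] == ord("Z"):
                    branches.add("FUZ")   # "deep" behavior
    return branches

def mutate(data: bytes) -> bytes:
    """Randomly overwrite one byte, occasionally append one."""
    buf = bytearray(data)
    buf[random.randrange(len(buf))] = random.randrange(256)
    if random.random() < 0.3:
        buf.append(random.randrange(256))
    return bytes(buf)

def fuzz(iterations=20000):
    corpus, seen = [b"AAAA"], set()
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        cov = target(candidate)
        if not cov <= seen:          # feedback: did we cover anything new?
            seen |= cov
            corpus.append(candidate)  # keep the interesting input
    return seen

covered = fuzz()
```

Because the nested comparisons in `target` are discovered one level at a time, the corpus accumulates inputs that progressively unlock deeper branches; blind random generation would almost never produce the full `FUZ` prefix in one shot.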
Last Modified: 11/02/2020
Modified by: Koushik Sen