Award Abstract # 1815186
SHF: Small: Natural GUI-Based Testing of Mobile Apps via Mining Software Repositories

NSF Org: CCF
Division of Computing and Communication Foundations
Recipient: COLLEGE OF WILLIAM AND MARY
Initial Amendment Date: July 21, 2018
Latest Amendment Date: July 21, 2018
Award Number: 1815186
Award Instrument: Standard Grant
Program Manager: Sol Greenspan
CCF, Division of Computing and Communication Foundations
CSE, Directorate for Computer and Information Science and Engineering
Start Date: October 1, 2018
End Date: August 31, 2022 (Estimated)
Total Intended Award Amount: $450,000.00
Total Awarded Amount to Date: $450,000.00
Funds Obligated to Date: FY 2018 = $450,000.00
History of Investigator:
  • Denys Poshyvanyk (Principal Investigator)
    dposhyvanyk@wm.edu
Recipient Sponsored Research Office: College of William and Mary
1314 S MOUNT VERNON AVE
WILLIAMSBURG
VA  US  23185
(757)221-3965
Sponsor Congressional District: 01
Primary Place of Performance: College of William and Mary
VA  US  23187-8795
Primary Place of Performance Congressional District: 01
Unique Entity Identifier (UEI): EVWJPCY6AD97
Parent UEI: EVWJPCY6AD97
NSF Program(s): Software & Hardware Foundations
Primary Program Source: 01001819DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 7923, 7944
Program Element Code(s): 779800
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Mobile devices have become an integral, ubiquitous part of modern society. The popularity of smartphones and tablets is largely due to the success of mobile software, colloquially referred to as "apps," which enables users to carry out a wide range of computing tasks in an intuitive and convenient manner. The burgeoning mobile app market is fueled by rapidly evolving, performant hardware and software platforms that support increasingly complex functionality. For apps to succeed in marketplaces such as Apple's App Store or Google Play, it is imperative that they function as intended, which means they must be well tested. However, the unique aspects of mobile apps that make them popular, such as their touch-based interfaces, rapidly evolving platforms, and contextual features such as sensors, make them difficult to test effectively and efficiently. Additionally, as the marketplace for mobile apps matures, developers must ensure that their apps function well across a myriad of devices while addressing feedback from an increasingly large user base through app store reviews. These challenges illustrate that mobile developers require practical automated support to ensure that their apps are adequately tested. This research project aims to design and thoroughly validate an automated testing approach for mobile apps that overcomes the challenges listed above. In turn, it is anticipated that the techniques enabled by this research will contribute to better-tested, higher-quality mobile applications, benefiting both our society, which increasingly depends on smartphone apps, and the developers and teams that create them.

To solve these fundamental challenges, this project aims to develop an automated testing framework that combines novel statistical representations of mobile apps with information gleaned through mining software repositories (MSR) techniques to efficiently generate practical, effective test scenarios. More specifically, a novel testing framework, coined T+, will be developed. T+ is rooted in a probabilistic model-based representation of mobile apps. This model will enable a transformative automated approach for generating feasible test cases that are decoupled from low-level events, can be executed on different devices, and support multiple testing goals and adequacy criteria. Additionally, this research will define and develop monitoring mechanisms for identifying change- and fault-prone APIs in the underlying platform and third-party libraries, as well as informative user reviews. Incorporating this information into the statistical model of T+ will allow for the generation and prioritization of test cases covering these APIs and reviews. Broader impacts of this work will reside in (1) improving the state of the practice in testing mobile apps, where difficulties are faced in ensuring that apps are adequately tested with respect to changing platforms, APIs, reviews, and numerous devices; (2) demonstrating improved testing practices with industry partners, which will be documented as best practices for other development organizations and test centers to adopt; (3) developing educational course content and piloting it in the classroom as part of this research project; and (4) actively involving underrepresented categories of students in this research program.
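
To make the idea of generating "natural" test scenarios from a statistical usage model more concrete, the sketch below is a minimal, hypothetical example in Java. It is not the T+ implementation; it simply illustrates how a first-order Markov model over high-level GUI events, estimated from mined usage traces, could be sampled to produce an abstract test scenario. The class name (UsageModelSketch) and all event names are placeholders introduced for illustration.

import java.util.*;

// Hypothetical sketch (not the actual T+ framework): a first-order Markov
// model over high-level GUI events, estimated from mined usage traces and
// sampled to produce "natural" abstract test scenarios.
public class UsageModelSketch {

    // Transition counts: current event -> (next event -> observed count).
    private final Map<String, Map<String, Integer>> counts = new HashMap<>();
    private final Random random = new Random(42);

    // Learn transitions from one observed sequence of high-level GUI events.
    public void addTrace(List<String> trace) {
        for (int i = 0; i + 1 < trace.size(); i++) {
            counts.computeIfAbsent(trace.get(i), k -> new HashMap<>())
                  .merge(trace.get(i + 1), 1, Integer::sum);
        }
    }

    // Sample a likely next event proportionally to observed transition frequencies.
    private String sampleNext(String current) {
        Map<String, Integer> next = counts.get(current);
        if (next == null || next.isEmpty()) return null;
        int total = next.values().stream().mapToInt(Integer::intValue).sum();
        int r = random.nextInt(total);
        for (Map.Entry<String, Integer> e : next.entrySet()) {
            r -= e.getValue();
            if (r < 0) return e.getKey();
        }
        return null; // unreachable when total > 0
    }

    // Generate an abstract scenario of at most maxLength events, starting from startEvent.
    public List<String> generateScenario(String startEvent, int maxLength) {
        List<String> scenario = new ArrayList<>();
        String current = startEvent;
        while (current != null && scenario.size() < maxLength) {
            scenario.add(current);
            current = sampleNext(current);
        }
        return scenario;
    }

    public static void main(String[] args) {
        UsageModelSketch model = new UsageModelSketch();
        // Illustrative mined traces of high-level events (placeholders, not real data).
        model.addTrace(List.of("openApp", "tapLogin", "enterCredentials", "tapSubmit", "viewDashboard"));
        model.addTrace(List.of("openApp", "tapLogin", "enterCredentials", "tapSubmit", "viewSettings"));
        System.out.println(model.generateScenario("openApp", 10));
    }
}

In a full framework, each abstract event sampled this way would still need to be mapped to concrete, device-specific GUI actions before execution.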

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Moran, Kevin; Bernal-Cardenas, Carlos; Curcio, Michael; Bonett, Richard; Poshyvanyk, Denys. "Machine Learning-Based Prototyping of Graphical User Interfaces for Mobile Apps." IEEE Transactions on Software Engineering, v.46, 2020. DOI: 10.1109/TSE.2018.2844788
Tufano, Michele; Watson, Cody; Bavota, Gabriele; Di Penta, Massimiliano; White, Martin; Poshyvanyk, Denys. "An Empirical Study on Learning Bug-Fixing Patches in the Wild via Neural Machine Translation." ACM Transactions on Software Engineering and Methodology, v.28, 2019. DOI: 10.1145/3340544

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

The mobile application marketplace continues to grow at an unprecedented rate, spurring wide developer interest in creating “apps” with increasingly complex functionality for a world that is trending toward mobile computing. Unfortunately, mobile developers face several challenges, such as rapid platform/library evolution, API instability, platform fragmentation, and the impact of user reviews and ratings on the success of their apps. These challenges highlight the need for automated tools to support the development, testing, and maintenance of mobile apps. While automated tools are available, mobile app testing, in practice, is still performed mostly manually. This is largely due to the following factors that inhibit current approaches from being effective, efficient, and practical: (i) automated input generation tools typically do not consider natural event sequences representing the functional use cases of an app that are easy for developers to comprehend; (ii) automatically generated test scenarios typically cannot be used across diverse device configurations or under varying contextual settings (e.g., network on/off); (iii) automated testing typically does not account for, nor test, features related to rapidly evolving mobile platforms and APIs; and (iv) existing testing approaches fail to consider feedback from user reviews.

To address these issues, the project investigated and produced (1) a novel approach for automatically extracting GUI, usage, domain, and contextual models for Android apps; (2) novel statistical representations that holistically combine these models and aim at generating representative and (un)natural test cases; (3) an innovative approach for monitoring change- and fault-prone APIs in the underlying Android platform and third-party libraries, as well as user reviews; and (4) a suite of new tools that have been evaluated and made publicly available. Some of the broader impacts of this project have included (1) improving the state of the practice in testing mobile apps, where difficulties are faced in ensuring that apps are adequately tested with respect to changing platforms, APIs, reviews, and numerous devices; (2) demonstrating improved testing practices with our industry partners, which were documented as best practices for other development organizations and test centers to adopt; (3) developing educational course content and piloting it in our courses as part of this research project; and (4) actively involving underrepresented categories of students in this research program.
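
As one illustration of how mined information about risky APIs could feed back into test generation, the following hypothetical Java sketch ranks candidate test scenarios by how many change- or fault-prone APIs they exercise, so riskier functionality is tested first. It is not the project's actual tooling; the class, method, scenario, and API names are placeholders chosen for the example.

import java.util.*;

// Hypothetical sketch (not the project's tooling): prioritize generated test
// scenarios by their coverage of change- or fault-prone APIs mined from
// platform and library histories.
public class TestPrioritizationSketch {

    // A generated scenario, tagged with the set of API signatures it exercises.
    public record Scenario(String name, Set<String> apisCovered) {}

    // Order scenarios so that those touching more risky APIs run first.
    public static List<Scenario> prioritize(List<Scenario> scenarios, Set<String> riskyApis) {
        return scenarios.stream()
                .sorted(Comparator.comparingLong(
                        (Scenario s) -> s.apisCovered().stream().filter(riskyApis::contains).count())
                        .reversed())
                .toList();
    }

    public static void main(String[] args) {
        // Placeholder API names and scenarios, for illustration only.
        Set<String> risky = Set.of("Camera.open", "LocationManager.requestLocationUpdates");
        List<Scenario> ranked = prioritize(
                List.of(
                        new Scenario("browseCatalog", Set.of("RecyclerView.scrollToPosition")),
                        new Scenario("sharePhoto", Set.of("Camera.open", "Intent.createChooser"))),
                risky);
        ranked.forEach(s -> System.out.println(s.name()));
    }
}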

The resulting work has been published in several high-quality software engineering conferences and journals (some gaining best/distinguished paper recognition). A number of undergraduate and graduate students, including a minority doctoral student and a minority undergraduate student, were trained and became contributing members of this project. Several of these students co-authored and presented papers at international conferences. Multiple graduate-level theses were derived from this project. The students graduating from this program have secured full-time employment in academia and the software industry. The scientific knowledge gained was integrated into multiple undergraduate and graduate classes at the host institutions, broadening STEM education. A number of open-source software tools and datasets were developed and made publicly available. The data repositories resulting from this project are accessible to the scientific community and the general public through the PI’s website. The project enhanced and strengthened long-term professional collaborations, not only between the PI and his students but also among the multiple collaborators involved. The computing infrastructure established during the project supports the continued availability of these resources.


Last Modified: 12/21/2022
Modified by: Denys Poshyvanyk
