Award Abstract # 1656877
CRII: SHF: Toward Sustainable Software for Science - Implementing and Assessing Systematic Testing Approaches for Scientific Software

NSF Org: CCF
Division of Computing and Communication Foundations
Recipient: MONTANA STATE UNIVERSITY
Initial Amendment Date: February 21, 2017
Latest Amendment Date: March 20, 2017
Award Number: 1656877
Award Instrument: Standard Grant
Program Manager: Sol Greenspan
sgreensp@nsf.gov
 (703)292-7841
CCF
 Division of Computing and Communication Foundations
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: March 1, 2017
End Date: February 28, 2021 (Estimated)
Total Intended Award Amount: $152,110.00
Total Awarded Amount to Date: $157,006.00
Funds Obligated to Date: FY 2017 = $157,006.00
History of Investigator:
  • Upulee Kanewala (Principal Investigator)
    upulee.kanewala@unf.edu
Recipient Sponsored Research Office: Montana State University
216 MONTANA HALL
BOZEMAN
MT  US  59717
(406)994-2381
Sponsor Congressional District: 01
Primary Place of Performance: Montana State University
Bozeman
MT  US  59717-2470
Primary Place of Performance Congressional District: 01
Unique Entity Identifier (UEI): EJ3UF7TK8RT5
Parent UEI:
NSF Program(s): CRII CISE Research Initiation,
CI REUSE,
EPSCoR Co-Funding
Primary Program Source: 01001718DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 7798, 7944, 8228, 9150
Program Element Code(s): 026Y00, 689200, 915000
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Scientific software is widely used in science and engineering. In
addition, results obtained from scientific software are used as evidence in
research publications. Despite the critical usage of such software, many
studies have pointed out a lack of systematic testing of scientific software.
As a result, subtle program errors can remain undetected. There are numerous
reports of subtle faults in scientific software causing losses of billions of
dollars and the withdrawal of scientific publications. This research aims to
develop automated techniques for test oracle creation, test case selection,
and test oracle prioritization targeting scientific software. The intellectual
merits of this research are the following: (1) It advances the understanding of
the scientific software development process and investigates methods to
incorporate systematic software testing activities into scientific software
development without interfering with the scientific inquiry, (2) It forges new
approaches to develop automated test oracles for programs that produce complex
outputs and for programs that produce outputs that are previously unknown, (3)
It develops new metrics to measure the effectiveness of partial test oracles
and uses them for test oracle prioritization, (4) It extends the boundaries of
current test case selection to effectively work with partial or approximate
test oracles. The project's broader significance and importance are that it (1)
produces a publicly available, easy-to-use testing tool that can be incorporated
into the scientific software development culture such that the testing
activities will not interfere with “doing science,” (2) recruits Native Americans
and women into computer science research, and (3) develops a new upper-level
undergraduate course titled “Software Development Methods for Scientists”
targeting senior-level undergraduate students in non-CS disciplines.

This project develops METtester, an automated testing framework for
scientific software. The framework analyzes the source code of the program
under test and uses machine learning techniques to identify suitable test
oracles called metamorphic relations (MRs). It then automatically generates
effective test cases for the identified MRs using a mutation-based approach,
and creates a prioritized order of MRs so that faults are detected as early as
possible during the testing process. Finally, METtester tests the program
under test using the prioritized order of MRs with the generated test cases.
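To make the idea behind MRs concrete, here is a minimal, hypothetical sketch of metamorphic testing in Python (this is not METtester itself, and the program and relations are illustrative): for a program computing the arithmetic mean, relations such as "permuting the input leaves the output unchanged" serve as partial oracles, so failures can be detected without ever knowing the expected output value.

```python
import random

def mean(xs):
    # Program under test: arithmetic mean of a list of numbers.
    return sum(xs) / len(xs)

def mr_permutation(f, xs):
    # MR1: permuting the input must not change the output.
    shuffled = xs[:]
    random.shuffle(shuffled)
    return abs(f(xs) - f(shuffled)) < 1e-9

def mr_scaling(f, xs, k=3.0):
    # MR2: scaling every element by k must scale the mean by k.
    return abs(f([k * x for x in xs]) - k * f(xs)) < 1e-9

# A source test case; follow-up test cases are derived by the MRs,
# so no expected output value is ever needed.
source = [random.uniform(-100.0, 100.0) for _ in range(50)]
assert mr_permutation(mean, source)
assert mr_scaling(mean, source)
```

A faulty implementation (say, dividing by `len(xs) - 1`) would violate the scaling relation on most inputs, which is how an MR exposes a fault despite the absence of a conventional oracle.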

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Kanewala, Upulee and Chen, Tsong Yueh. "Metamorphic Testing: A Simple Yet Effective Approach for Testing Scientific Software." Computing in Science & Engineering, v.21, 2019. doi:10.1109/MCSE.2018.2875368
Hardin, Bonnie and Kanewala, Upulee. "Using Semi-Supervised Learning for Predicting Metamorphic Relations." 3rd International Workshop on Metamorphic Testing, 2018. doi:10.1145/3193977.3193985
Rahman, Karishma; Kahanda, Indika; and Kanewala, Upulee. "MRpredT: Using Text Mining for Metamorphic Relation Prediction." IEEE/ACM 42nd International Conference on Software Engineering Workshops, 2020. doi:10.1145/3387940.3392250
Rahman, Karishma and Kanewala, Upulee. "Predicting Metamorphic Relations for Matrix Calculation Programs." 3rd International Workshop on Metamorphic Testing, 2018. doi:10.1145/3193977.3193983
Saha, Prashanta and Kanewala, Upulee. "Fault Detection Effectiveness of Metamorphic Relations Developed for Testing Supervised Classifiers." 2019 IEEE International Conference on Artificial Intelligence Testing (AITest), 2019. doi:10.1109/AITest.2019.00019
Saha, Prashanta and Kanewala, Upulee. "Fault Detection Effectiveness of Source Test Case Generation Strategies for Metamorphic Testing." 3rd International Workshop on Metamorphic Testing, 2018. doi:10.1145/3193977.3193982
Saha, Prashanta and Kanewala, Upulee. "Improving the Effectiveness of Automatically Generated Test Suites Using Metamorphic Testing." IEEE/ACM 42nd International Conference on Software Engineering Workshops, 2020. doi:10.1145/3387940.3392253

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

Scientific software is widely used in science and engineering. Such software plays a vital role in critical decision-making in fields such as the nuclear industry, medicine, and the military. In addition, results obtained from scientific software are used as evidence in research publications. Despite the critical usage of such software, many studies have pointed out a lack of systematic testing of scientific software. As a result, subtle program errors can remain undetected. These subtle errors can produce seemingly correct outputs without causing the program to crash. There are numerous reports of subtle faults in scientific software causing losses of billions of dollars and the withdrawal of scientific publications.

One of the greatest challenges faced when testing scientific software is the oracle problem. Systematic automated testing requires a test oracle to check whether the outputs produced by test cases are correct according to the expected behavior of the program. But most scientific software is written to find answers that are previously unknown. Therefore, test oracles do not exist, or it is practically difficult to implement them for these programs.

Consequently, testing techniques that depend on a perfect test oracle cannot be used for testing most scientific software. Up to now, software testing research has not provided effective solutions to the challenges presented by scientific software. This project aimed to address these challenges by developing automated techniques for test oracle creation, test case selection, and test oracle prioritization. Further, the developed techniques are implemented in a publicly available, easy-to-use testing tool so that they can be incorporated into the scientific software development culture such that the testing activities will not interfere with “doing science.”

Our results show that systematic software testing, specifically metamorphic testing combined with the novel approaches developed in this project, can be incorporated into scientific software development. Specifically, our work shows that

  • Machine learning methods such as supervised and semi-supervised learning, combined with text mining, can be used to predict metamorphic relations. The predicted metamorphic relations make it possible to determine whether a test case has passed or failed, and thus enable automated testing of programs that produce complex outputs or previously unknown outputs.
  • The two systematic source test case generation approaches that we developed can increase the fault-detection effectiveness of metamorphic testing compared to the state-of-the-art approach, which is random source test case generation.
  • Prioritizing metamorphic relations using fault detection information on previous versions of the software will save resources during regression testing of scientific software.
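The prioritization result in the last bullet can be sketched with a simple greedy scheme (an illustration only, not necessarily the project's actual algorithm; the MR names and fault IDs below are hypothetical): order the MRs by how many distinct faults each detected in previous versions, so the historically most effective relations run first.

```python
def prioritize_mrs(history):
    # history maps an MR name to the set of faults it detected
    # in previous versions of the software under test.
    # Greedy ordering: run the MR with the best historical
    # fault-detection record first, so faults surface early.
    return sorted(history, key=lambda mr: len(history[mr]), reverse=True)

# Hypothetical fault-detection history for three MRs.
history = {
    "MR_permutation": {"f1", "f3"},
    "MR_scaling": {"f1", "f2", "f4"},
    "MR_addition": {"f2"},
}
print(prioritize_mrs(history))  # ['MR_scaling', 'MR_permutation', 'MR_addition']
```

Under a fixed regression-testing budget, such an ordering lets testing stop early while still exercising the relations most likely to reveal faults.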


In addition, the developed techniques mentioned above are implemented in METtester, a publicly available metamorphic testing tool, so that they can be utilized during scientific software development activities.


Last Modified: 06/21/2021
Modified by: Upulee Kanewala
