
Award Abstract # 1918140
FMitF: Collaborative Research: Track I: Preventing Human Errors in Cyber-human Systems with Formal Approaches to Human Reliability Rating and Model Repair

NSF Org: CCF
Division of Computing and Communication Foundations
Recipient: CARNEGIE MELLON UNIVERSITY
Initial Amendment Date: June 3, 2019
Latest Amendment Date: May 17, 2022
Award Number: 1918140
Award Instrument: Standard Grant
Program Manager: Pavithra Prabhakar
pprabhak@nsf.gov
 (703)292-2585
CCF - Division of Computing and Communication Foundations
CSE - Directorate for Computer and Information Science and Engineering
Start Date: August 15, 2019
End Date: July 31, 2023 (Estimated)
Total Intended Award Amount: $375,000.00
Total Awarded Amount to Date: $399,000.00
Funds Obligated to Date: FY 2019 = $375,000.00
FY 2020 = $8,000.00
FY 2021 = $8,000.00
FY 2022 = $8,000.00
History of Investigator:
  • Eunsuk Kang (Principal Investigator)
    eskang@cmu.edu
Recipient Sponsored Research Office: Carnegie-Mellon University
5000 FORBES AVE
PITTSBURGH
PA  US  15213-3815
(412)268-8746
Sponsor Congressional District: 12
Primary Place of Performance: CARNEGIE MELLON UNIVERSITY
5000 Forbes Ave
Pittsburgh
PA  US  15213-3890
Primary Place of Performance Congressional District: 12
Unique Entity Identifier (UEI): U3NKNFLNQ613
Parent UEI: U3NKNFLNQ613
NSF Program(s): FMitF: Formal Methods in the Field,
Software & Hardware Foundations
Primary Program Source: 01002223DB NSF RESEARCH & RELATED ACTIVITIES
01001920DB NSF RESEARCH & RELATED ACTIVITIES
01002021DB NSF RESEARCH & RELATED ACTIVITIES
01002122DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 062Z, 071Z, 8206, 9251
Program Element Code(s): 094Y00, 779800
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Human error is a major factor in failures across safety-critical domains. Such failures are very complex, with human errors often arising from unexpected interactions between system automation and human behavior. Thus, researchers have investigated how formal methods tools and techniques, which were developed to mathematically prove properties of complex computer systems, can be adapted to human-automation interaction (HAI) problems. These techniques are powerful and capable of discovering unexpected, critical human errors and system failures. However, existing techniques do not provide a means for fixing discovered human errors. Further, interface changes can both introduce new, unforeseen errors and risk negative transfer effects, in which changes that conflict with previously learned behaviors themselves cause problems. This project will investigate a novel approach to HAI evaluation and repair that will help designers and analysts efficiently eliminate many kinds of potential interaction errors while minimizing the risk of introducing additional human errors. The developed methods will be validated in design cases of real safety-critical systems, including an industrial furnace, nuclear power plant procedures, a radiation therapy machine, and pharmacy medication dispensing processes. The knowledge and tools produced in this research will be made available to researchers and designers and have potential applications to a wide range of safety-critical systems. This, in turn, will help avoid system disasters, prevent injuries, save lives, and protect critical resources across society.

The project is divided into three main thrusts. First, the team will develop a theoretically grounded method for scoring the likelihood that humans will behave erroneously for a given HAI design through a novel synthesis of formal methods, erroneous human behavior models, negative transfer theory, and human reliability analyses. Second, it will introduce a new theory of formal model repair in interactive systems that will underlie the development of methods for removing problematic HAI errors by adapting both human-machine interfaces and the workflow of the associated tasks. Third, the scoring and model repair methods will be combined to allow automated model repair to find design interventions that will reduce the likelihood of changes causing problematic human errors, using a database of common error patterns and solutions to be developed through the project. Across all three of these thrusts, the team will use human subject experiments, testing, and formal proofs to validate that the advances achieve their hypothesized capabilities. The work will lead to improved methods for evaluating human reliability aspects of interfaces, widen the application of formal methods to new contexts, and provide resources for researchers, designers, and engineers to improve the reliability of cyber-human systems.
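To make the style of analysis concrete, here is a minimal sketch in Python (illustrative only, and not the project's actual tooling or models): a hypothetical "select, confirm, fire" console is composed with an operator model that injects up to one omission or repetition error into the nominal task, and every resulting behavior is checked against a safety property. The action names, the task, and the property are all assumptions made for the example.

```python
# A minimal sketch with a hypothetical three-step "select, confirm, fire" console;
# illustrative only -- not the project's tooling or models.

def machine_step(state, action):
    """Toy machine model: state = (mode, confirmed)."""
    mode, confirmed = state
    if action in ("select_xray", "select_ebeam"):
        return (action.split("_")[1], False)   # changing the mode clears confirmation
    if action == "confirm":
        return (mode, True)
    return state                               # "fire" does not change the interface state

def safe(state, action):
    """Safety property (hypothetical): the beam may only be fired after confirmation."""
    return not (action == "fire" and not state[1])

NOMINAL_TASK = ["select_xray", "confirm", "fire"]  # normative operator task

def traces_with_errors(task, budget=1):
    """Operator model: all executions of the task with at most `budget`
    omission or repetition errors (two classic erroneous-behavior patterns)."""
    if not task:
        yield []
        return
    head, rest = task[0], task[1:]
    for tail in traces_with_errors(rest, budget):
        yield [head] + tail                    # step performed correctly
    if budget > 0:
        for tail in traces_with_errors(rest, budget - 1):
            yield [head, head] + tail          # repetition error
            yield tail                         # omission error

def check(initial=("none", False)):
    """Exhaustively check every modeled operator behavior against the property."""
    for trace in traces_with_errors(NOMINAL_TASK):
        state = initial
        for action in trace:
            if not safe(state, action):
                return ("UNSAFE", trace)
            state = machine_step(state, action)
    return ("SAFE", None)

print(check())  # -> ('UNSAFE', ['select_xray', 'fire']): omitting "confirm" defeats the design
```

In the project's terms, the trace enumeration above plays the role of a (very small) erroneous-human-behavior model; the scoring method in the first thrust additionally aims to attach likelihoods to such deviations using human reliability analysis.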

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Bagheri, Hamid; Kang, Eunsuk; Mansoor, Niloofar. "Synthesis of Assurance Cases for Software Certification." Proceedings of the International Conference on Software Engineering, 2020. https://doi.org/10.1145/3377816.3381728
Bolton, Matthew L.; Zheng, Xi; Kang, Eunsuk. "A formal method for including the probability of erroneous human task behavior in system analyses." Reliability Engineering & System Safety, v.213, 2021. https://doi.org/10.1016/j.ress.2021.107764
Zhang, Changjian; Saluja, Tarang; Meira-Góes, Rômulo; Bolton, Matthew; Garlan, David; Kang, Eunsuk. "Robustification of Behavioral Designs against Environmental Deviations." 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), 2023. https://doi.org/10.1109/ICSE48619.2023.00046
Zhang, Changjian; Wagner, Ryan; Orvalho, Pedro; Garlan, David; Manquinho, Vasco; Martins, Ruben; Kang, Eunsuk. "AlloyMax: Bringing Maximum Satisfaction to Relational Specifications." ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE), 2021. https://doi.org/10.1145/3468264.3468587

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

Poor interface design has been identified as a major factor behind numerous fatal accidents in safety-critical systems, including medical devices, automobiles, and aviation systems. One recurring problem is that human operators inadvertently make mistakes from time to time (e.g., omitting or repeating a critical action in a task), but many human-machine interfaces (HMIs) are not designed to recognize or handle such errors. As a result, when a safety failure occurs, it is often the operators who receive the blame, even when a more reliable interface could have prevented a human error from leading to the failure.

To address these challenges, the project aimed to (1) develop a rigorous understanding of human errors and their impact on the safety of software-intensive systems using formal and probabilistic models, and (2) develop tools and techniques for systematically analyzing and repairing HMIs to improve their reliability. In particular, this project has resulted in:

- Novel formal theories of robustness as a measure of a system's ability to tolerate erroneous behaviors by a human operator,

- Techniques for automatically analyzing HMI designs to measure their robustness, and for repairing HMIs to improve it (a toy illustration of both ideas appears in the sketch after this list), and

- Methods for probabilistically reasoning about the impact of operator errors on the safety of a system.
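As a toy illustration of the first two items above (and not the project's actual tools or models), the following Python sketch measures robustness as the largest number of injected omission or repetition errors a design tolerates without violating a safety property, and shows how a hypothetical interface repair (an interlock that ignores an unconfirmed "fire" command) raises that number. The task steps, state variables, and property are assumptions made for the example.

```python
# A minimal, self-contained sketch with hypothetical models; not the project's tools.
# Robustness here = the largest error budget k such that every execution of the task
# with at most k omission/repetition errors stays safe.

NOMINAL_TASK = ["select_mode", "confirm", "fire"]

def deviated_traces(task, budget):
    """All executions of `task` with at most `budget` omissions or repetitions."""
    if not task:
        yield []
        return
    head, rest = task[0], task[1:]
    for tail in deviated_traces(rest, budget):
        yield [head] + tail            # step performed correctly
    if budget > 0:
        for tail in deviated_traces(rest, budget - 1):
            yield [head, head] + tail  # repetition error
            yield tail                 # omission error

def run(step, trace):
    """Unsafe if the beam ever activates while the mode is unconfirmed."""
    state = {"confirmed": False}
    for action in trace:
        fired = step(state, action)
        if fired and not state["confirmed"]:
            return False
    return True

def original_step(state, action):
    if action == "select_mode":
        state["confirmed"] = False
    elif action == "confirm":
        state["confirmed"] = True
    elif action == "fire":
        return True                    # beam fires whether or not the mode was confirmed
    return False

def repaired_step(state, action):
    # Repair (an interlock): the interface ignores "fire" until the mode is confirmed.
    if action == "fire" and not state["confirmed"]:
        return False
    return original_step(state, action)

def robustness(step, max_budget=3):
    """Largest explored budget for which all deviated executions remain safe."""
    k = -1
    for budget in range(max_budget + 1):
        if all(run(step, t) for t in deviated_traces(NOMINAL_TASK, budget)):
            k = budget
        else:
            break
    return k

print("original design tolerates", robustness(original_step), "error(s)")  # -> 0
print("repaired design tolerates", robustness(repaired_step), "error(s)")  # -> 3 (safe up to the explored bound)
```

Under this much-simplified measure, the repair strictly increases the number of operator errors the design can absorb; the robustness analysis and repair techniques developed in the project aim to provide the same kind of guarantee for realistic HMI models and richer notions of deviation.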

The theories and techniques developed in this project were demonstrated on a wide range of realistic case studies, including HMIs for a radiation therapy system, an infusion pump, an automated teller machine, an electronic voting system, and a public fare collection system.

By providing tools and techniques for designing HMIs that anticipate and tolerate operator errors, the outcomes of this project will contribute to safer, more dependable systems that closely interact with users.



Last Modified: 11/28/2023
Modified by: Eunsuk Kang

