
NSF Org: | CNS Division Of Computer and Network Systems |
Recipient: | |
Initial Amendment Date: | June 28, 2018 |
Latest Amendment Date: | July 17, 2023 |
Award Number: | 1817249 |
Award Instrument: | Standard Grant |
Program Manager: | Sara Kiesler, skiesler@nsf.gov, (703) 292-8643, CNS Division Of Computer and Network Systems, CSE Directorate for Computer and Information Science and Engineering |
Start Date: | September 1, 2018 |
End Date: | May 31, 2024 (Estimated) |
Total Intended Award Amount: | $499,671.00 |
Total Awarded Amount to Date: | $558,018.00 |
Funds Obligated to Date: | FY 2021 = $58,347.00 |
History of Investigator: | |
Recipient Sponsored Research Office: | 2150 SHATTUCK AVE, BERKELEY, CA, US 94704-1345, (510) 666-2900 |
Sponsor Congressional District: | |
Primary Place of Performance: | 1947 Center St Ste 600, Berkeley, CA, US 94704-1198 |
Primary Place of Performance Congressional District: | |
Unique Entity Identifier (UEI): | |
Parent UEI: | |
NSF Program(s): | Secure & Trustworthy Cyberspace |
Primary Program Source: | 01002122DB NSF RESEARCH & RELATED ACTIVITIES |
Program Reference Code(s): | |
Program Element Code(s): | |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
Despite advances in computer security, there are still situations in which users must manually perform computer security tasks (e.g., rebooting to apply updates). Although many people recognize that these tasks are important, they still procrastinate. Procrastination is often caused by the failure to properly weigh the long-term security benefits against short-term costs and the annoyance of interrupting the primary task. Researchers in decision-making and behavioral economics have studied this phenomenon of biased weighting for decades and yielded viable techniques for overcoming it in various domains, including health, savings, and charitable giving. Through this multidisciplinary research agenda, the investigators are empirically examining how these techniques can best be applied to computer security. The initial focus of this research is an investigation of how time commitments can encourage compliance with security tasks such as upgrades.
Various techniques to increase security compliance rates have been examined, but none address a root cause of the problem: present bias. Present bias is the tendency to discount future risks and gains in favor of immediate gratification. Based on insights from the field of behavioral economics, this project involves empirical studies to examine when and under what conditions commitment nudges, among other persuasion techniques aimed at countering present bias, can be used to improve security behaviors. Drawing on the research team's joint expertise in computer security, human-computer interaction, decision-making, psychology, and behavioral economics, the team is performing experiments to yield actionable insights on the design of future computer security user interfaces.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
Despite recent advances in increasing computer security through automation, there are still situations in which humans must manually perform computer security tasks. These tasks may include enabling automatic updates, rebooting machines to apply those updates, configuring automatic backups, or enrolling in two-factor authentication. However, despite viewing these tasks as important for security, many people still choose to ignore them. Two decades of usable security research have shown that these tasks are often seen as annoyances because they are almost never the user's primary objective. It is therefore no surprise that these tasks are postponed, in some cases indefinitely, leaving users vulnerable for extended periods of time. While research has shown that security can be increased by removing the human-in-the-loop through increased automation, there are still many important security tasks that require human action.
Researchers and technology companies have tested various techniques to increase security compliance rates, but to our knowledge, no attempts have been made to address what we consider to be the root cause of the problem: present bias. Present bias is the tendency to discount future risks and gains in favor of immediate gratification. Techniques for countering present bias include commitment devices, persuasion profiling, and affective computing, all of which have been shown to be effective across multiple research fields, such as decision-making and psychology. Based on insights from recent developments in behavioral economics on the effectiveness of pre-commitment nudges and persuasion techniques in other realms, such as saving for retirement or charitable giving, we pursued a research agenda exploring when and under what conditions commitment nudges, among other techniques aimed at countering present bias, can be used to improve users' security behaviors.
We performed multiple experiments across specific security application areas that are impacted by present bias, such as applying system software updates, changing compromised passwords, enrolling in two-factor authentication, and configuring automatic backups. Through these experiments, we identified the situations and circumstances in which nudges are likely to overcome present bias, as well as the specific circumstances in which nudges do not appear to have an appreciable effect. These results will allow system designers to create more effective user interfaces and messaging that are likely to lead to better security decision-making by their end users.
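As a purely illustrative aside, not taken from the award materials, the sketch below shows one way a commitment-style update prompt could be structured: rather than offering only "update now" or "remind me later," it asks the user to commit to a concrete time and schedules the task for that time. All names, options, and scheduling choices here are hypothetical.

```python
"""Minimal, hypothetical sketch of a commitment-nudge update prompt.
It is not the project's implementation; it only illustrates the idea of
turning a user's stated commitment into a concrete scheduled time."""

from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class UpdatePrompt:
    """A dialog that asks the user to commit to an install time up front."""
    task: str                      # e.g., "Reboot to apply security update"
    options: tuple = ("now", "tonight", "tomorrow morning")

    def schedule(self, choice: str, now: datetime) -> datetime:
        """Turn the user's commitment into a concrete scheduled time."""
        if choice == "now":
            return now
        if choice == "tonight":
            return now.replace(hour=22, minute=0, second=0, microsecond=0)
        if choice == "tomorrow morning":
            return (now + timedelta(days=1)).replace(hour=8, minute=0,
                                                     second=0, microsecond=0)
        raise ValueError(f"unknown choice: {choice}")


if __name__ == "__main__":
    prompt = UpdatePrompt(task="Reboot to apply security update")
    when = prompt.schedule("tonight", datetime.now())
    # Unlike a plain "remind me later" dialog, the user has named a specific
    # time, which the system then holds them to.
    print(f"{prompt.task} scheduled for {when:%Y-%m-%d %H:%M}")
```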
Last Modified: 10/01/2024
Modified by: Serge M Egelman