Award Abstract # 1548114
EAGER: Exploring the Use of Deception to Enhance Cyber Security

NSF Org: CNS (Division Of Computer and Network Systems)
Recipient: PURDUE UNIVERSITY
Initial Amendment Date: August 4, 2015
Latest Amendment Date: August 4, 2015
Award Number: 1548114
Award Instrument: Standard Grant
Program Manager: Nina Amla
CNS (Division Of Computer and Network Systems)
CSE (Directorate for Computer and Information Science and Engineering)
Start Date: August 1, 2015
End Date: May 31, 2018 (Estimated)
Total Intended Award Amount: $182,299.00
Total Awarded Amount to Date: $182,299.00
Funds Obligated to Date: FY 2015 = $182,299.00
History of Investigator:
  • Eugene Spafford (Principal Investigator)
  • Saurabh Bagchi (Co-Principal Investigator)
Recipient Sponsored Research Office: Purdue University
2550 NORTHWESTERN AVE # 1100
WEST LAFAYETTE
IN  US  47906-1332
(765)494-1055
Sponsor Congressional District: 04
Primary Place of Performance: Purdue University
656 Oval Dr
West Lafayette
IN  US  47907-2086
Primary Place of Performance Congressional District: 04
Unique Entity Identifier (UEI): YRXVL4JYCEF5
Parent UEI: YRXVL4JYCEF5
NSF Program(s): Secure & Trustworthy Cyberspace
Primary Program Source: 01001516DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 7434, 7916
Program Element Code(s): 806000
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Our computing systems are constantly under attack, by everyone from pranksters to agents of hostile nations. Many of those systems and networks make the adversary's task easier by responding to attacks with useful information, because software and protocols have for decades been written to provide informative feedback for error detection and correction. It is precisely this behavior that improves attackers' chances of success, allowing them to map networks and discover system flaws. This research addresses the question "Are there uses of deceptive responses that help prevent successful attacks?" It further investigates whether it is possible to characterize and model the types of situations where deception may be useful. The results of this work provide cyber system designers with new defensive measures, and guidance as to when they are useful to deploy.

The project includes two related lines of research. The first is to explore new applications of deceit in system defense. The researchers investigate presenting deceptive responses to attempts to exploit known vulnerabilities, and building a file system that "lies" about the creation and deletion of key files. Each of these mechanisms should support a system's security by providing early warning of bad behavior as well as blunting attacks. Deceitful responses to attacks can lead a perpetrator to employ ineffective attacks, wasting time and effort. A deceptive file system can capture forensic data about an attempted attack while only appearing to allow the installation of malicious files. The second line of research explores how to apply hypergame models to cyber defenses that use deceptive techniques. Hypergames are an extension of game theory that incorporates incorrect and uncertain information. By constructing hypergame models, we should be able to identify situations where employing deception as a defense yields a favorable outcome.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Avery, J., Almeshekah, M., & Spafford, E. "Offensive Deception in Computing" 12th International Conference on Cyber Warfare and Security 2017 Proceedings, 2017, p. 23. 1897684061
Christopher N. Gutierrez, Eugene H. Spafford, Saurabh Bagchi, Thomas Yurek "Reactive Redundancy for Data Destruction Protection (R2D2)" Computers & Security, v.74, 2018, p. 184. https://doi.org/10.1016/j.cose.2017.12.012
Christopher N. Gutierrez, Mohammed H. Almeshekah, Eugene H. Spafford, Mikhail J. Atallah, and Jeff Avery "Inhibiting and Detecting Offline Password Cracking Using ErsatzPasswords" ACM Transactions on Privacy and Security (TOPS), v.19, 2016, p. 9. https://doi.org/10.1145/2996457
Jeffrey Avery and E. H. Spafford "Ghost Patches: Fake Patches for Fake Vulnerabilities" 32nd International Conference on ICT Systems Security and Privacy Protection (IFIP), 2017, p. 399. https://doi.org/10.1007/978-3-319-58469-0_27
Jeffrey Avery and John Ross Wallrabenstein "Formally modeling deceptive patches using a game-based approach" Computers & Security, v.75, 2018, p. 182. https://doi.org/10.1016/j.cose.2018.02.009

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

This project was devoted to the exploration of how deception could be used to add to the defenses of computer and network systems ("cyber").  Most current computing systems provide helpful error messages and feedback for purposes of debugging and enhanced user experience, but that same feedback may assist attackers in exploiting weaknesses.


Our project examined how providing false information and decoys could be used to slow, mislead, or expose attackers.  Our work comprised four separate but related efforts, described below.


The first effort was to develop a formal taxonomy of deceptive practices, including masking, decoys, and providing misleading information.  This effort resulted in a published, comprehensive model that can be used to classify every known form of cyber deception.


The second effort was to explore the effectiveness of providing false information in patches for security problems.  Patching is widely used to fix flaws and add functionality, but patches may be reverse-engineered to find attack points.  We explored methods of obfuscating patches and providing false patches ("ghost patches") to slow or confuse attackers.  Extensive analysis and experiments with prototypes showed this had some minor promise, but would not be highly effective against sophisticated attackers.
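
The ghost-patch idea can be illustrated with a small sketch. This is hypothetical code, not from the project's actual prototypes: a genuine fix ships alongside a decoy branch that appears to guard a non-existent vulnerability, so an attacker diffing the patched and unpatched versions may waste effort probing the decoy.

```python
# Illustrative "ghost patch" sketch (all names are hypothetical): one check
# is the real fix, the other is a decoy that guards nothing.

def handle_request(payload: bytes) -> str:
    # Genuine fix: bound the input length (the real vulnerability).
    if len(payload) > 1024:
        raise ValueError("payload too large")

    # Ghost check: looks like a patched flaw, but this condition never
    # mattered; reverse engineering the patch surfaces a false attack point.
    if payload.startswith(b"\x7fELF"):
        raise ValueError("unsupported payload type")

    return payload.decode("utf-8", errors="replace")
```

As the report notes, a sophisticated attacker can eventually distinguish such decoys from real fixes, which limits the technique's value.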

 
We concluded that releasing new instances of the entire software artifact, using existing methods of obfuscation and confusion, would likely be more effective in slowing reverse-engineering analysis and attack than releasing "ghost" patches.  We determined there might be more involved methods of building false and illusory patches, but we did not fully explore them; complicated false patching would make legitimate maintenance and debugging more difficult, and thus would be of questionable value.


Our third effort was to explore how to present a misleading view of secondary memory (disk) to an attacker.  Attackers often delete system resources, alter logs, and remove malware traces to hide malicious activity.  If those altered and deleted items were available to the system defenders, it would provide an advantage in both recovery and forensics.  However, it is essential to let the attacker believe that the alterations and deletions are occurring, so that no countermeasures are taken and the attacker does not terminate the attack before sufficient evidence is collected.


We constructed two prototype systems that collect the state of artifacts an attacker might change or delete, yet are not visible to the end user.  One was part of a running OS, and the other was a modified virtual machine monitor.  In each case, we explored making the capture of data undetectable to the user, yet of sufficient fidelity to allow reconstruction of the system state before the attack, and capture of any uploads.  Our final system, R2D2, worked well enough that it was also able to detect and counter ransomware attacks that attempt to encrypt vast portions of the disk; we were able to recover the entire disk contents after such an attack.  We also determined how to use a user-specified parameter to trade off the amount of extra storage used against the level of protection.


Our fourth effort was to explore the use of hypergames -- an extension of game theory to incomplete information -- to model the deployment and use of deception.  Our goal was to explore whether hypergames might be used to determine when deception is appropriate to use, and when it might not be effective.  We constructed some simple models using single rounds of deception and found that deception mechanisms were effective in cases where the attacker suspected deception was present.  As of the end of this project, we have yet to explore more complex hypergame models of deception, such as multiple rounds of "games" between the attacker and the defender.
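
The defining feature of a hypergame is that each player best-responds to the game they believe they are playing, which may differ from the true game. The toy single-round example below is purely illustrative (the payoff numbers and target names are invented, not from the project's models): the attacker picks a target using perceived payoffs, while the defender's true payoffs differ because one target is a decoy.

```python
# Toy single-round hypergame (illustrative numbers only): the attacker
# optimizes over *perceived* payoffs; the defender scores the *true* game.

targets = ["server_A", "server_B"]

# Attacker's perceived payoff for attacking each target (both look real).
perceived = {"server_A": 5, "server_B": 4}

# True defender payoff: server_B is a decoy, so an attack on it yields
# intelligence for the defender instead of damage.
true_defender_payoff = {"server_A": -5, "server_B": +3}

# Without deceptive signaling, the attacker best-responds in the perceived
# game and hits the real server.
attack = max(targets, key=lambda t: perceived[t])

# With deceptive signaling (e.g. making the decoy look richer), the
# attacker's perceived game changes and the decoy draws the attack.
deceived = {"server_A": 5, "server_B": 6}
attack_deceived = max(targets, key=lambda t: deceived[t])

print(attack, true_defender_payoff[attack])                    # server_A -5
print(attack_deceived, true_defender_payoff[attack_deceived])  # server_B 3
```

Comparing the defender's true payoff across the two perceived games is the kind of question a hypergame model answers: whether shaping the attacker's perception yields a favorable outcome for the defender.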


In addition to 15 publications on our research results in conferences and journals, we also produced several software prototypes that are available for further experimentation by others.  Three students completed their Ph.D. degrees while working on this project, and several other graduate and undergraduate students obtained practical training in systems development and defense.


Both principal investigators on this project are planning future research on the issues raised during the effort.  They are also exploring technology transfer opportunities to employ some of the lessons learned in real-world products.

 


Last Modified: 07/06/2018
Modified by: Eugene H Spafford
