
NSF Org: CNS Division of Computer and Network Systems
Recipient:
Initial Amendment Date: February 28, 2014
Latest Amendment Date: May 5, 2014
Award Number: 1434582
Award Instrument: Standard Grant
Program Manager: Sol Greenspan, sgreensp@nsf.gov, (703) 292-7841, CNS Division of Computer and Network Systems, CSE Directorate for Computer and Information Science and Engineering
Start Date: July 1, 2013
End Date: July 31, 2017 (Estimated)
Total Intended Award Amount: $250,000.00
Total Awarded Amount to Date: $265,960.00
Funds Obligated to Date: FY 2014 = $15,960.00
History of Investigator:
Recipient Sponsored Research Office: 506 S Wright St, Urbana, IL 61801-3620, US, (217) 333-2187
Sponsor Congressional District:
Primary Place of Performance: Champaign, IL 61820-7473, US
Primary Place of Performance Congressional District:
Unique Entity Identifier (UEI):
Parent UEI:
NSF Program(s): Special Projects - CNS; Secure & Trustworthy Cyberspace
Primary Program Source: 01001415DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s):
Program Element Code(s):
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
The security of critical information infrastructures depends upon effective techniques to detect vulnerabilities commonly exploited by malicious attacks. Due to poor coding practices or human error, a known vulnerability discovered and patched in one code location often also exists in many other unpatched code locations, either in the same code base or in other code bases. Furthermore, patches are often error-prone, resulting in new vulnerabilities. This project develops practical techniques for detecting code-level similarity to prevent such vulnerabilities. It has the potential to help build a more reliable and secure information system infrastructure, which will have a tremendous economic impact on society because of our growing reliance on information technologies.
In particular, the project aims to develop practical similarity-based testing and analysis techniques to detect unpatched vulnerable code and to validate patches to the detected vulnerable code at both the source code and binary levels. To this end, it focuses on three main technical directions: (1) developing techniques for detecting source-level vulnerabilities by adapting and refining an industrial-strength tool, (2) developing capabilities for detecting binary-level vulnerabilities by extending preliminary work on detecting code clones in binaries, and (3) supporting patch validation and repair by developing methodologies and techniques to validate software patches and help produce correct, secure patches. The project helps discover new techniques for source- and binary-level vulnerability analysis and provides a better understanding of the fundamental and practical challenges of building highly secure and reliable software.
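To make the similarity-based detection idea concrete, here is a minimal, illustrative Python sketch (not the project's actual tooling): it fingerprints a known-vulnerable code fragment with normalized token k-grams and flags candidate code whose fingerprints largely overlap. The tokenizer, the k-gram size, and the 0.7 threshold are assumptions chosen for illustration. Normalizing identifiers and literals lets the sketch match clones that differ only in variable names, the kind of copy-paste reuse described above.

```python
# Illustrative sketch only: token k-gram similarity for flagging code that
# resembles a known-vulnerable fragment. Not the project's actual tool.
import re

KEYWORDS = {"void", "char", "int", "if", "else", "for", "while", "return", "sizeof"}

def tokenize(code):
    """Crude lexer: keep keywords and punctuation, normalize identifiers/literals."""
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", code)
    normalized = []
    for t in tokens:
        if t in KEYWORDS:
            normalized.append(t)
        elif re.match(r"[A-Za-z_]", t):
            normalized.append("ID")    # identifier names do not matter for matching
        elif t.isdigit():
            normalized.append("NUM")   # nor do concrete literal values
        else:
            normalized.append(t)
    return normalized

def kgrams(tokens, k=5):
    return {tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1)}

def similarity(frag_a, frag_b, k=5):
    """Jaccard similarity between the fragments' sets of token k-grams."""
    a, b = kgrams(tokenize(frag_a), k), kgrams(tokenize(frag_b), k)
    return len(a & b) / max(1, len(a | b))

# A known-vulnerable fragment (unchecked strcpy) and a candidate from a code base.
vulnerable = "void f(char *s) { char buf[8]; strcpy(buf, s); }"
candidate  = "void g(char *input) { char name[8]; strcpy(name, input); }"

if similarity(vulnerable, candidate) > 0.7:   # threshold chosen arbitrarily here
    print("candidate resembles known-vulnerable code; review and patch it")
```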
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
Over the course of this NSF project, we (the principal investigator and graduate students) have introduced and developed practical testing and analysis techniques to detect defects and validate patches at both the source and binary levels, and to recover effectively from program runtime failures. The project has helped discover new analysis and runtime-recovery techniques and has led to a better understanding of the fundamental and practical challenges of building highly secure and reliable software. The research and educational activities we conducted have advanced the state of the art in software security and reliability, and have helped train the next generation of researchers and engineers. More specifically:
- We have developed three metrics that use dynamic analysis to approximate the behavioral similarity of programs. We leverage random testing and dynamic symbolic execution (DSE) to generate test inputs, and run the programs on these inputs to compute the metric values. The metric based on random testing approximates behavioral similarity with high accuracy, and the metric based on DSE is very effective at ordering programs by behavioral similarity (a simplified sketch of the random-testing idea appears after this list).
- We have developed a static program analysis that extracts the contexts of security-sensitive behaviors to help mobile-app analysis differentiate between malicious and benign behaviors. Malicious and benign behaviors within apps can be differentiated based on the contexts that trigger security-sensitive behaviors, i.e., the events and conditions that cause those behaviors to occur: the maliciousness of a security-sensitive behavior is more closely related to the intention of the behavior (reflected in its contexts) than to the type of security-sensitive resource the behavior accesses.
- We have developed an approach to prioritizing Android device models for individual mobile apps based on mining large-scale usage data. The approach adapts the concept of operational profiling to mobile apps: the usage of an app on a specific device model reflects the importance of that device model for the app. The approach includes a collaborative-filtering technique to predict an app's usage on different device models, even if the app is entirely new, based on the usage data of a large collection of apps (a toy collaborative-filtering sketch appears after this list).
- We have developed an automated approach for resolving the semantics of user inputs requested by mobile applications. The approach's design includes a number of novel techniques for extracting and resolving user interface labels and addressing ambiguity in semantics, resulting in significant improvements over prior work. Such work enables the clustering of similar user inputs/apps together for security analysis.
- We have developed an approach to prioritizing test cases in performance regression testing for collection-intensive software, a common type of modern software that makes heavy use of collections. The prioritization is based on a performance impact analysis that estimates the performance impact of a given code revision on a given test execution (a simplified sketch appears after this list).
- We have disseminated our research results through publications in top, highly competitive conference venues, as well as through tool distributions and research exchanges.
- We have helped train the next generation of computer scientists through graduate and undergraduate student advising, and the next generation of engineers through undergraduate- and graduate-level education.
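The random-testing metric mentioned in the first bullet can be pictured with the following simplified sketch (my own approximation, not the project's actual metric definitions): two program versions are run on the same randomly generated inputs, and behavioral similarity is estimated as the fraction of inputs on which their observable outcomes, including crashes, agree.

```python
# Simplified sketch of a random-testing-based behavioral similarity metric.
# The metric definition (fraction of agreeing outcomes) is an assumption made
# for illustration; the project defines its own three metrics.
import random

def behavioral_similarity(prog_a, prog_b, gen_input, trials=5000, seed=0):
    """Estimate similarity as the fraction of random inputs on which the two
    programs produce the same observable outcome (output value or exception)."""
    rng = random.Random(seed)
    agree = 0
    for _ in range(trials):
        x = gen_input(rng)
        outcomes = []
        for prog in (prog_a, prog_b):
            try:
                outcomes.append(("value", prog(x)))
            except Exception as e:          # a crash is also observable behavior
                outcomes.append(("exception", type(e).__name__))
        agree += (outcomes[0] == outcomes[1])
    return agree / trials

# Toy example: an original function and a "patched" version that diverges at x == 3.
def original(x):
    return abs(x) % 7

def patched(x):
    return -1 if x == 3 else abs(x) % 7

score = behavioral_similarity(original, patched,
                              gen_input=lambda rng: rng.randint(-10, 10))
print(f"estimated behavioral similarity: {score:.3f}")
```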
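The collaborative-filtering idea for device-model prioritization can be sketched as below, under the simplifying assumption that the target app already has partial usage data (the project's actual technique also handles entirely new apps). The app names, usage numbers, and cosine-based similarity are illustrative choices, not the project's prediction model.

```python
# Toy item-based collaborative-filtering sketch for prioritizing device models
# by predicted usage of a target app. Data and similarity measure are made up.
import numpy as np

# Rows: apps, columns: device models; entries: observed usage share (toy data).
apps = ["chat_app", "photo_app", "game_app", "target_app"]
devices = ["model_A", "model_B", "model_C", "model_D"]
usage = np.array([
    [0.40, 0.30, 0.20, 0.10],
    [0.35, 0.35, 0.20, 0.10],
    [0.10, 0.20, 0.30, 0.40],
    [0.38, 0.32, np.nan, np.nan],   # target app: usage unknown on models C and D
])

def cosine(u, v):
    """Cosine similarity over the device models both rows have data for."""
    mask = ~np.isnan(u) & ~np.isnan(v)
    if not mask.any():
        return 0.0
    u, v = u[mask], v[mask]
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

target = usage[-1]
weights = np.array([cosine(target, row) for row in usage[:-1]])

# Predict missing usage entries as a similarity-weighted average over other apps.
pred = target.copy()
for j, val in enumerate(target):
    if np.isnan(val):
        pred[j] = np.average(usage[:-1, j], weights=weights)

ranking = sorted(zip(devices, pred), key=lambda p: p[1], reverse=True)
print("device models prioritized for target_app:", ranking)
```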
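The performance-impact-based test prioritization can likewise be illustrated with a toy sketch: estimate each test's impact as the sum, over changed methods, of how often the test calls the method times an estimated per-call cost change, and run the most-impacted tests first. The call counts, cost deltas, and the impact formula below are my simplifications for illustration, not the project's actual analysis.

```python
# Toy sketch of impact-based test prioritization for a code revision.
# calls[test][method]: how often each test exercises each method (from profiling).
calls = {
    "test_parse_large_doc": {"List.append": 120_000, "Map.get": 5_000},
    "test_render_page":     {"List.append": 2_000,   "Map.get": 80_000},
    "test_login":           {"Map.get": 300},
}

# Estimated per-call cost change (microseconds) introduced by the revision.
cost_delta_us = {"List.append": 0.8, "Map.get": 0.05}

def estimated_impact(test_calls):
    """Sum of (call count) x (per-call cost delta) over the changed methods."""
    return sum(n * cost_delta_us.get(m, 0.0) for m, n in test_calls.items())

prioritized = sorted(calls, key=lambda t: estimated_impact(calls[t]), reverse=True)
for t in prioritized:
    print(f"{t}: estimated impact {estimated_impact(calls[t]):.1f} us")
```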
Last Modified: 01/18/2018
Modified by: Tao Xie