
NSF Org: CNS Division of Computer and Network Systems
Recipient:
Initial Amendment Date: June 10, 2016
Latest Amendment Date: April 17, 2018
Award Number: 1618117
Award Instrument: Standard Grant
Program Manager: Sol Greenspan, sgreensp@nsf.gov, (703) 292-7841, CNS Division of Computer and Network Systems, CSE Directorate for Computer and Information Science and Engineering
Start Date: July 1, 2016
End Date: June 30, 2020 (Estimated)
Total Intended Award Amount: $200,000.00
Total Awarded Amount to Date: $232,000.00
Funds Obligated to Date: FY 2017 = $16,000.00; FY 2018 = $16,000.00
History of Investigator: Haining Wang (Principal Investigator)
Recipient Sponsored Research Office: 550 S COLLEGE AVE, NEWARK, DE, US 19713-1324, (302) 831-2136
Sponsor Congressional District:
Primary Place of Performance: 312 DuPont Hall, Newark, DE, US 19716-2553
Primary Place of Performance Congressional District:
Unique Entity Identifier (UEI):
Parent UEI:
NSF Program(s): Special Projects - CNS; Secure & Trustworthy Cyberspace
Primary Program Source: 01001718DB NSF RESEARCH & RELATED ACTIVITIES; 01001819DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s):
Program Element Code(s):
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
Living in an age when services are often rated, people increasingly depend on the reputation of sellers or products/apps when making purchases online. This puts pressure on sellers to gain and maintain a high reputation by offering reliable, high-quality services and products, which benefits society at large. Unfortunately, because competition in e-commerce and app stores is extremely high, reputation-manipulation services have recently grown into a sizable business, termed Reputation-Escalation-as-a-Service (REaaS). As REaaS attacks grow in scale, effective countermeasures must be designed to detect and defend against them.
This research addresses REaaS from two angles. First, it aims to understand the economics of REaaS by conducting empirical studies of e-markets. Second, it aims to develop defensive measures involving both technical approaches and market intervention. The technical approaches focus on detecting REaaS in e-markets, with novel detection techniques developed using content analysis, machine learning, social ties, and graph theory. For market intervention, after a holistic analysis of REaaS, the research aims to identify its bottleneck (the weakest link) and to measure the efficacy of intervention. The outcomes of this data-driven security research will enhance security education through labs based on socio-economic data analysis. Its success will draw more attention from industry practitioners, government sectors, and academia to jointly tackle the REaaS problem.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the embargo (administrative interval). Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
In this project, we developed effective solutions to address Reputation-Escalation-as-a-Service (REaaS) in several application domains, including e-commerce markets, recommender systems, online reviews, and mobile app stores.
E-commerce portals are among the sites hit hardest by malicious bots: about 20% of traffic to e-commerce portals comes from malicious bots, and malicious bots have generated up to 70% of Amazon.com traffic. As one of the largest e-commerce companies in the world, Alibaba also observed a substantial amount of malicious bot traffic to its two main subsidiary sites, Taobao.com and Tmall.com. In this project, we developed a novel and efficient approach for detecting web bot traffic. We deployed and evaluated the approach on the Taobao/Tmall platforms, where it performed well on those large websites, identifying a large set of IP addresses (IPs) used by malicious web bots. We then conducted an in-depth behavioral analysis on a sample of web bot traffic to better understand the characteristics that distinguish bot traffic from normal web traffic initiated by human users. The analysis reveals differences in active time, search queries, item and store preferences, and many other aspects. These findings provide new insights that can help public websites further improve web bot traffic detection and protect valuable web content.
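The detection system deployed on Taobao/Tmall is not reproduced here. As a minimal sketch of the behavioral-analysis idea described above, the following Python snippet aggregates per-IP features (request volume, active-hour spread, share of search queries, distinct items and stores) from a hypothetical parsed access log and flags outlier IPs with an unsupervised model; all column names and the contamination rate are illustrative assumptions, not the project's actual features or thresholds.

# Minimal sketch (not the deployed Taobao/Tmall system): flag likely bot IPs
# from per-IP behavioral features using an unsupervised outlier detector.
import pandas as pd
from sklearn.ensemble import IsolationForest

def build_ip_features(log: pd.DataFrame) -> pd.DataFrame:
    """log is a hypothetical parsed access log with columns:
    ip, timestamp (datetime), is_search (bool), item_id, store_id."""
    log = log.copy()
    log["hour"] = log["timestamp"].dt.hour
    grouped = log.groupby("ip")
    feats = pd.DataFrame({
        "requests": grouped.size(),
        "active_hours": grouped["hour"].nunique(),       # bots tend to be active around the clock
        "search_ratio": grouped["is_search"].mean(),     # crawlers issue many search queries
        "distinct_items": grouped["item_id"].nunique(),  # broad, indiscriminate item coverage
        "distinct_stores": grouped["store_id"].nunique(),
    })
    feats["req_per_hour"] = feats["requests"] / feats["active_hours"].clip(lower=1)
    return feats

def flag_suspicious_ips(log: pd.DataFrame, contamination: float = 0.05) -> pd.Index:
    feats = build_ip_features(log)
    model = IsolationForest(contamination=contamination, random_state=0)
    labels = model.fit_predict(feats)        # -1 marks outliers
    return feats.index[labels == -1]         # IPs whose behavior deviates from human norms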
Recommender systems are increasingly used in a variety of web services, providing a list of recommended items in which a user may have an interest. While important, recommender systems are vulnerable to various malicious attacks. In this project, we studied a new security vulnerability in recommender systems caused by web injection, through which malicious actors stealthily tamper with unprotected in-transit HTTP webpage content and force victims to visit specific items in certain web services (even those running HTTPS), e.g., YouTube. By doing so, malicious actors can promote their targeted items in those services. To obtain a deeper understanding of the recommender systems of interest (YouTube, Yelp, Taobao, and the 360 App market), we first conducted a measurement-based analysis of several real-world recommender systems by leveraging machine learning algorithms. Then, we implemented web injection on three different types of devices (computer, router, and proxy server) to investigate the scenarios in which web injection could occur. Based on this implementation, we demonstrated that it is feasible, and sometimes effective, to manipulate real-world recommender systems through web injection. We also developed several countermeasures against such manipulation.
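The project's injection implementations and countermeasures are not shown here. As a rough illustration of one countermeasure idea, the sketch below fetches the same page over plain HTTP and over HTTPS and reports script sources that appear only in the HTTP copy, one possible symptom of in-transit tampering by an on-path device. The placeholder host and helper names are assumptions for illustration only.

# Minimal sketch (not the project's countermeasure): compare the script tags of a
# page fetched over HTTP and HTTPS; scripts present only in the HTTP copy may have
# been injected in transit by an on-path computer, router, or proxy.
import re
import requests

SCRIPT_SRC = re.compile(r'<script[^>]+src=["\']([^"\']+)["\']', re.IGNORECASE)

def script_sources(url: str) -> set[str]:
    html = requests.get(url, timeout=10).text
    return set(SCRIPT_SRC.findall(html))

def injected_scripts(host: str) -> set[str]:
    """Return script URLs that appear only in the unprotected HTTP copy."""
    http_scripts = script_sources(f"http://{host}/")
    https_scripts = script_sources(f"https://{host}/")
    return http_scripts - https_scripts

if __name__ == "__main__":
    # 'example.com' is a placeholder host used purely for illustration.
    for src in injected_scripts("example.com"):
        print("possibly injected:", src)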
Online reviews, which play a crucial role in today's business ecosystem, have become the primary source of consumer opinions. Because of their importance, professional review-writing services are employed to produce paid reviews and are even exploited to conduct opinion spam. Posting deceptive reviews can mislead customers, yield significant gains or losses for service vendors, and erode confidence in the entire online purchasing ecosystem. In this project, we developed a novel approach to detecting deceptive reviews written by professional review writers. We leveraged authorship attribution to identify the writing style of reviewers and employed a multiview clustering method to group authors with similar writing styles. In addition, we compared different neural network models for modeling deceptive writing styles. A comparison of different classifiers shows that a convolutional neural network achieves the best overall detection performance, with 90% accuracy. Finally, we evaluated the effectiveness of the multiview clustering framework on large-scale Amazon datasets as a case study and demonstrated that the clustering method outperforms existing K-means and hierarchical methods.
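The exact network architecture and datasets from the project are not public in this report. The following is a minimal sketch of a one-dimensional convolutional text classifier of the kind referenced above, written with Keras; the vocabulary size, sequence length, and layer settings are illustrative assumptions rather than the project's configuration.

# Minimal sketch (not the project's exact model): a 1-D CNN that classifies a review
# as deceptive (1) or genuine (0) from its token-ID sequence.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 20000      # assumed vocabulary size
SEQ_LEN = 300           # assumed maximum review length in tokens

def build_review_cnn() -> tf.keras.Model:
    model = tf.keras.Sequential([
        layers.Input(shape=(SEQ_LEN,)),
        layers.Embedding(VOCAB_SIZE, 128),          # learn word embeddings
        layers.Conv1D(128, 5, activation="relu"),   # capture local stylistic n-gram patterns
        layers.GlobalMaxPooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),      # probability the review is deceptive
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage (x_train: token IDs of shape (n, SEQ_LEN), y_train: 0/1 labels):
# model = build_review_cnn()
# model.fit(x_train, y_train, validation_split=0.1, epochs=5, batch_size=64)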
Manipulating an app's average rating is a popular and feasible way for malicious developers and users to promote apps in mobile app stores. In this project, we developed a two-phase machine learning approach to detecting app rating manipulation attacks. In the first phase, we generated feature rankings for different app stores and found that the top features match the characteristics of abused apps and malicious users. In the second phase, we selected the top N features and trained models for each app store; with cross-validation, our models achieve an F-score of 85%. We also used these models to discover new suspicious apps in our dataset and evaluated them against two criteria. Finally, our analysis of the suspicious apps classified by the models revealed some interesting results. The average review length at higher rating levels is shorter than at lower rating levels; the higher the rating level, the shorter its reviews. This is because higher-rated apps tend to have fewer bugs, while users usually use more words to complain about and report bugs in lower-rated reviews.
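As a rough sketch of the two-phase approach described above (feature ranking followed by training on the top N features with cross-validation), the Python snippet below ranks features with a random forest and then cross-validates a classifier restricted to the top N features. The choice of classifier, the value of N, and the scoring setup are assumptions for illustration, not the project's pipeline.

# Minimal sketch (not the project's pipeline): phase 1 ranks features per app store,
# phase 2 trains and cross-validates a classifier on the top-N features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def rank_features(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Phase 1: return feature indices sorted from most to least important."""
    forest = RandomForestClassifier(n_estimators=200, random_state=0)
    forest.fit(X, y)
    return np.argsort(forest.feature_importances_)[::-1]

def train_top_n(X: np.ndarray, y: np.ndarray, n: int = 10) -> float:
    """Phase 2: cross-validated F-score of a model using only the top-n features."""
    top = rank_features(X, y)[:n]
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X[:, top], y, cv=5, scoring="f1")
    return scores.mean()

# Usage per app store (X: app/user features, y: 1 for manipulated apps, 0 otherwise):
# print(f"cross-validated F-score: {train_top_n(X, y, n=10):.2f}")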
Last Modified: 07/10/2020
Modified by: Haining Wang
Please report errors in award information by writing to: awardsearch@nsf.gov.