Award Abstract # 1553437
CAREER: Trustworthy Social Systems Using Network Science

NSF Org: CNS, Division Of Computer and Network Systems
Recipient: THE TRUSTEES OF PRINCETON UNIVERSITY
Initial Amendment Date: December 31, 2015
Latest Amendment Date: April 2, 2020
Award Number: 1553437
Award Instrument: Continuing Grant
Program Manager: Sara Kiesler
skiesler@nsf.gov
 (703)292-8643
CNS, Division Of Computer and Network Systems
CSE, Directorate for Computer and Information Science and Engineering
Start Date: February 1, 2016
End Date: January 31, 2022 (Estimated)
Total Intended Award Amount: $520,957.00
Total Awarded Amount to Date: $520,957.00
Funds Obligated to Date: FY 2016 = $99,931.00
FY 2017 = $100,759.00
FY 2018 = $101,188.00
FY 2019 = $111,719.00
FY 2020 = $107,360.00
History of Investigator:
  • Prateek Mittal (Principal Investigator)
Recipient Sponsored Research Office: Princeton University
1 NASSAU HALL
PRINCETON
NJ  US  08544-2001
(609)258-3090
Sponsor Congressional District: 12
Primary Place of Performance: Princeton University
87 Prospect Avenue, 2nd Floor
Princeton
NJ  US  08544-2020
Primary Place of Performance Congressional District: 12
Unique Entity Identifier (UEI): NJ1YPQXQG7U5
Parent UEI:
NSF Program(s): Secure & Trustworthy Cyberspace
Primary Program Source: 01001617DB NSF RESEARCH & RELATED ACTIVIT
01001718DB NSF RESEARCH & RELATED ACTIVIT
01001819DB NSF RESEARCH & RELATED ACTIVIT
01001920DB NSF RESEARCH & RELATED ACTIVIT
01002021DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 1045, 7434
Program Element Code(s): 806000
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Social media systems have transformed societal communication, including news discovery, recommendations, social interaction, e-commerce, and political and governance activities. However, their rising popularity has brought concerns about security and privacy to the forefront. This project aims to design trustworthy social systems by building on the discipline of network science. First, the project is developing techniques for analyzing social media data that protect against risks to individual privacy; new research is needed because existing approaches cannot provide rigorous privacy guarantees. Second, the project is developing new approaches to mitigate the threat of "fake accounts" in social systems, despite attempts by the creators of those accounts to elude detection. Both deployed and academic approaches remain vulnerable to strategic adversaries, motivating the development of novel defense mechanisms based on network science. The findings and new designs from this research will directly impact the security and privacy of a broad class of social network users.

The private network analytics thrust builds on the ideas of differential privacy, ensuring sufficient uncertainty in results to hide individual relationships. The project introduces dependent differential privacy, which protects against disclosure of information associated with an individual, as well as mutual information privacy, an entropy-based measure. The Sybil mitigation thrust is based on the idea of adversarial machine learning: the creators of fake accounts are presumed to adapt their mechanisms to changing detection approaches. This work exploits new features, such as temporal dynamics of the network, to address this problem. Finally, the project aims to integrate the research with an educational initiative for developing pedagogical approaches and content for trustworthy social systems.
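The project's dependent differential privacy mechanism is not reproduced here, but the standard differential privacy machinery it builds on can be sketched with the classic Laplace mechanism. The following is a minimal illustration under assumed edge-level privacy (adding or removing one relationship changes an edge count by at most 1); the function and variable names are hypothetical, not the project's implementation.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a noisy statistic satisfying epsilon-differential privacy.

    Noise is drawn from Laplace(0, sensitivity / epsilon), so the
    released value hides any single individual's contribution.
    """
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Example: privately release the number of edges (relationships) in a
# tiny social graph. Under edge-level privacy the sensitivity is 1.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
noisy_count = laplace_mechanism(len(edges), sensitivity=1, epsilon=0.5)
```

Note that this baseline assumes tuples are independent; the dependent differential privacy formulation cited above is motivated precisely by the case where social relationships are correlated, which inflates the effective sensitivity.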

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


(Showing: 1 - 10 of 43)
Arjun Bhagoji, Daniel Cullina, Prateek Mittal "Lower Bounds on Adversarial Robustness from Optimal Transport" NeurIPS , 2019
Arjun Bhagoji, Supriyo Chakraborty, Prateek Mittal, Seraphin Calo "Analyzing Federated Learning through an Adversarial Lens" ICML , 2019
Birge-Lee, Henry and Wang, Liang and Rexford, Jennifer and Mittal, Prateek "SICO: Surgical Interception Attacks by Manipulating BGP Communities" 2019 ACM SIGSAC Conference on Computer and Communications Security CCS. , 2019 10.1145/3319535.3363197 Citation Details
Changchang Liu and Prateek Mittal "LinkMirage: Enabling Privacy-preserving Analytics on Social Relationships" 23rd Annual Network and Distributed System Security Symposium, {NDSS} 2016, San Diego, California, USA, February 21-24, 2016 , 2016
Changchang Liu and Prateek Mittal and Supriyo Chakraborty "Dependence Makes You Vulnerable: Differential Privacy Under Dependent Tuples" 23rd Annual Network and Distributed System Security Symposium, {NDSS} 2016, San Diego, California, USA, February 21-24, 2016 , 2016
Changchang Liu, Xi He, Thee Chanyaswad, Shiqiang Wang, Prateek Mittal "Investigating Statistical Privacy Frameworks from the Perspective of Hypothesis Testing" PETS , 2019
Chong Xiang and Arjun Nitin Bhagoji and Vikash Sehwag and Prateek Mittal "PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking" 30th USENIX Security Symposium (USENIX Security 21) , 2021
Daniel Cullina, Arjun Bhagoji, Prateek Mittal "PAC-learning in the presence of evasion adversaries" Conference on Neural Information Processing Systems (NeurIPS) , 2018
David Marco Sommer and Liwei Song and Sameer Wagh and Prateek Mittal "Athena: Probabilistic Verification of Machine Unlearning" Proc. Priv. Enhancing Technol. , v.2022 , 2022 , p.268--290 10.56553/popets-2022-0072
Gerry Wan, Aaron Johnson, Ryan Wails, Sameer Wagh, Prateek Mittal "Guard Placement Attacks on Location-Based Path Selection Algorithms in Tor" PETS , 2019
Hans Hanley, Yixin Sun, Sameer Wagh, Prateek Mittal "DPSelect: A Differential Privacy Based Guard Relay Selection Algorithm for Tor" PETS , 2019

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

The major goal of this project was to explore the synergy between trustworthy systems and network/data science techniques. Our approach exploited the structural and temporal properties of networked systems and real-world data to enhance security and privacy.

Our research led to the development of algorithms and systems that (1) secured networked systems against various attacks, (2) enhanced the privacy of user communications, (3) enabled privacy-preserving data analytics, and (4) mitigated attacks against machine learning techniques.

This project led to substantial real-world impact, including (1) deployment of new defense mechanisms at Let's Encrypt, the world's largest certificate authority, which led to the secure issuance of over 1 billion TLS certificates, (2) enhancements to the performance of the Tor network, (3) integration of our privacy mechanisms into Google's TensorFlow Privacy, (4) blacklisting of malicious accounts at social networks such as Twitter, (5) integration of developed systems by NEC Labs, and (6) the discovery of privacy and security vulnerabilities in smart TV devices such as Roku.

Our research received multiple awards, including Runner-Up for the Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies in 2020, 2021, and 2022. Our research has been broadly disseminated in the research community and has created training opportunities for both graduate and undergraduate students.


Last Modified: 08/31/2022
Modified by: Prateek Mittal

