Award Abstract # 1910309
CIF: Small: Fundamental Tradeoffs Between Communication Load and Storage Resources in Distributed Systems

NSF Org: CCF
Division of Computing and Communication Foundations
Recipient: UNIVERSITY OF ILLINOIS
Initial Amendment Date: July 16, 2019
Latest Amendment Date: February 28, 2020
Award Number: 1910309
Award Instrument: Standard Grant
Program Manager: Phillip Regalia
pregalia@nsf.gov
 (703)292-2981
CCF
 Division of Computing and Communication Foundations
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: October 1, 2019
End Date: September 30, 2024 (Estimated)
Total Intended Award Amount: $475,000.00
Total Awarded Amount to Date: $499,000.00
Funds Obligated to Date: FY 2019 = $475,000.00
FY 2020 = $24,000.00
History of Investigator:
  • Daniela Tuninetti (Principal Investigator)
    danielat@uic.edu
Recipient Sponsored Research Office: University of Illinois at Chicago
809 S MARSHFIELD AVE M/C 551
CHICAGO
IL  US  60612-4305
(312)996-2862
Sponsor Congressional District: 07
Primary Place of Performance: University of Illinois at Chicago
IL  US  60612-4305
Primary Place of Performance Congressional District: 07
Unique Entity Identifier (UEI): W8XEAJDKMXH3
Parent UEI:
NSF Program(s): Comm & Information Foundations
Primary Program Source: 01001920DB NSF RESEARCH & RELATED ACTIVIT
01002021DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 7923, 7935, 9102, 9178, 9251
Program Element Code(s): 779700
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Future data communication networks will have to support a multitude of widely distributed devices that collect, process, and transfer data on the fly. Such networks are expected to struggle to meet the resulting communication demand. At the same time, the devices can leverage their inexpensive and distributed memory units to reduce the cost of exchanging information. A key question is which data should be placed in the memories so that, irrespective of what the devices will be asked to compute in the future, the amount of communication needed to satisfy these requests is as small as possible. The more devices the system can satisfy with a single transmission, the fewer transmissions are needed; these savings translate to reduced communication costs and an ability to run computation-intensive, latency-critical applications on resource-limited devices. This project aims to develop the theoretical and algorithmic foundations for distributed computation in the presence of local and limited storage resources in future hybrid hierarchical networks, where distributed devices collaborate to solve big-data inference tasks. Despite their fundamental nature, the results of this research are expected to impact the design of emerging communication models in industry. This project also develops a rich educational program for students, who will acquire critical skills to succeed in a competitive, diverse, and global workforce market.

This research identifies critical questions related to communication in distributed peer-to-peer settings with local, limited storage resources. The technical aims of the project are divided into two related thrusts: distributed cache-aided "Fog Radio Access Network" architectures and peer-to-peer distributed data shuffling. The former problem models the hybrid network architecture envisaged for 5G wireless networks, while the latter finds applications in distributed computation for big-data and machine-learning algorithms. Both share the caching theme, i.e., they leverage local storage to reduce the global communication load, but more importantly they leverage the distributed nature of the encoding in either the delivery or the data-shuffling phase. The concomitant technical questions are novel in many respects, and span information theory, coding theory, and combinatorics. The research includes innovative approaches to derive converse and achievable bounds with provable performance guarantees. The overarching goal is to develop a fundamental framework for distributed computation, with implications for other open problems such as distributed index coding and distributed interference alignment.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


(Showing: 1 - 10 of 28)
Liu, Tang and Tuninetti, Daniela "Decentralized Pliable Index Coding" 2019 IEEE International Symposium on Information Theory (ISIT) , 2019 https://doi.org/10.1109/ISIT.2019.8849854 Citation Details
Liu, Tang and Tuninetti, Daniela "Optimal Linear Coding Schemes for the Secure Decentralized Pliable Index Coding Problem" 2020 IEEE Information Theory Workshop (ITW) , 2021 https://doi.org/10.1109/ITW46852.2021.9457649 Citation Details
Ma, Yinbin and Tuninetti, Daniela "A General Coded Caching Scheme for Scalar Linear Function Retrieval" 2021 IEEE International Symposium on Information Theory (ISIT) , 2021 https://doi.org/10.1109/ISIT45174.2021.9518262 Citation Details
Ma, Yinbin and Tuninetti, Daniela "A General Coded Caching Scheme for Scalar Linear Function Retrieval" IEEE Journal on Selected Areas in Information Theory , v.3 , 2022 https://doi.org/10.1109/JSAIT.2022.3182706 Citation Details
Ma, Yinbin and Tuninetti, Daniela "An Achievable Scheme for the K-user Linear Computation Broadcast Channel" , 2025 Citation Details
Ma, Yinbin and Tuninetti, Daniela "Demand Privacy in Hotplug Caching Systems" IEEE ISIT 2023 , 2023 https://doi.org/10.1109/ISIT54713.2023.10206467 Citation Details
Ma, Yinbin and Tuninetti, Daniela "On Coded Caching Systems with Offline Users" 2022 IEEE International Symposium on Information Theory , 2022 https://doi.org/10.1109/ISIT50566.2022.9834866 Citation Details
Ma, Yinbin and Tuninetti, Daniela "On Demand-Private Hotplug Caching Systems" , 2024 https://doi.org/10.1109/ISIT57864.2024.10619379 Citation Details
Wan, Kai and Sun, Hua and Ji, Mingyue and Tuninetti, Daniela and Caire, Giuseppe "Cache-Aided General Linear Function Retrieval" Entropy , v.23 , 2021 https://doi.org/10.3390/e23010025 Citation Details
Wan, Kai and Sun, Hua and Ji, Mingyue and Tuninetti, Daniela and Caire, Giuseppe "Cache-Aided Matrix Multiplication Retrieval" 2021 IEEE International Symposium on Information Theory (ISIT) , 2021 https://doi.org/10.1109/ISIT45174.2021.9518042 Citation Details
Wan, Kai and Sun, Hua and Ji, Mingyue and Tuninetti, Daniela and Caire, Giuseppe "Cache-Aided Matrix Multiplication Retrieval" IEEE Transactions on Information Theory , v.68 , 2022 https://doi.org/10.1109/TIT.2022.3157835 Citation Details

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

This project focused on improving how data is stored and shared efficiently across large computer networks, particularly in environments where many users need access to data or wish to perform computations on it. This is especially relevant for modern applications such as distributed computing at the edge. In a technique known as coded caching, portions of data are pre-stored—or “cached”—on users’ devices. When users later make requests, the system sends carefully designed messages that leverage the cached content, significantly reducing the amount of data that needs to be transmitted. This approach saves bandwidth and improves response times. Traditionally, coded caching has been used to deliver individual files to users. Our research extended this framework to support not just the delivery of files, but also the delivery of computation results derived from the data, in distributed settings. This advancement is critical for real-world applications such as smart cities and health monitoring, where users are often interested in processed information rather than raw data.
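The coded-caching idea described above can be made concrete with a toy example in the spirit of the classic two-user, two-file scheme (a minimal sketch for illustration only; the file names, splitting, and parameters here are hypothetical and do not reproduce the project's actual constructions):

```python
# Toy coded-caching sketch: two users, two files A and B.
# Placement phase: each file is split into halves (A1, A2) and (B1, B2);
# user 1 caches (A1, B1) and user 2 caches (A2, B2).
# Delivery phase: if user 1 requests A and user 2 requests B, the server
# sends the single coded message A2 XOR B1. Each user cancels the part it
# already caches, so one transmission serves both requests -- half the
# load of sending A2 and B1 separately.

def xor(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

def split(f: bytes) -> tuple:
    half = len(f) // 2
    return f[:half], f[half:]

A, B = b"AAAAaaaa", b"BBBBbbbb"
A1, A2 = split(A)
B1, B2 = split(B)

cache1 = {"A1": A1, "B1": B1}  # user 1's placement
cache2 = {"A2": A2, "B2": B2}  # user 2's placement

# One coded transmission for the request pair (A, B):
coded = xor(A2, B1)

# User 1 XORs out its cached B1 to recover A2; user 2 XORs out A2 to get B1.
recovered_A = cache1["A1"] + xor(coded, cache1["B1"])
recovered_B = xor(coded, cache2["A2"]) + cache2["B2"]

assert recovered_A == A and recovered_B == B
print("both users served with one coded message of", len(coded), "bytes")
```

The project's results extend this basic file-delivery setting to the retrieval of functions computed on the data, and to systems where users may go offline, as described below.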

We introduced new models that consider more realistic challenges, such as users joining and leaving the system, and ensuring privacy and security of the computations. We developed new coding strategies that balance efficiency and robustness. These strategies allow the system to continue functioning even if some users become inactive or if security protections are needed. We analyzed different scenarios depending on how much memory is available to store data in advance, and we were able to describe both what is possible to achieve (achievable bounds) and what is fundamentally not possible (converse bounds).

The project also contributed to education and training. We supported and mentored one PhD student and eight undergraduate students, who gained hands-on experience with research in data systems and information theory.

The research resulted in six published conference papers, in which early results were shared with the broader scientific community, as well as two journal articles (one currently under review) presenting more comprehensive findings.

In summary, this project made progress in understanding how to build smarter, more efficient, and more secure systems for distributed data delivery and computation in networks. The results have potential applications in communication systems, cloud and edge computing, and secure data processing—while also contributing to the training of the next generation of researchers.

Last Modified: 04/15/2025
Modified by: Daniela Tuninetti

