
NSF Org: | CCF Division of Computing and Communication Foundations |
Initial Amendment Date: | June 26, 2017 |
Latest Amendment Date: | June 26, 2017 |
Award Number: | 1718470 |
Award Instrument: | Standard Grant |
Program Manager: | Phillip Regalia, pregalia@nsf.gov, (703) 292-2981, CCF Division of Computing and Communication Foundations, CSE Directorate for Computer and Information Science and Engineering |
Start Date: | July 1, 2017 |
End Date: | June 30, 2021 (Estimated) |
Total Intended Award Amount: | $449,996.00 |
Total Awarded Amount to Date: | $449,996.00 |
Recipient Sponsored Research Office: |
1350 BEARDSHEAR HALL AMES IA US 50011-2103 (515)294-5225 |
Primary Place of Performance: |
IA US 50011-2207 |
NSF Program(s): | Comm & Information Foundations |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
Content caching plays an important role in facilitating large-scale content distribution over the Internet. Traditional caching techniques typically store relatively popular content in the local memory (or cache) available at the users, or at the edge of the network. If a given user request can be serviced from the cache (instead of the server), the overall network traffic is reduced. This research will investigate practical issues in the deployment of coded caching, a technique that promises huge reductions in caching network traffic, albeit under potentially restrictive assumptions. This will pave the way for the adoption of coded caching in large-scale video and audio streaming websites, thus improving the efficiency of the national network infrastructure.
Minimizing per-user delay is one of the main aims of caching. For example, a given video-on-demand user wants his/her video playback to start as soon as possible, and it is not reasonable to expect several users to request videos at exactly the same time. The investigator will study practical algorithms that a server can use for responding to such asynchronous user requests. The algorithms will continue to leverage the rate reductions of coded caching while meeting user-specified deadlines.
Another issue with coded caching is that it assumes that the files at the server can be subdivided into a large number of subfiles. Specifically, this number grows exponentially with the number of users and is impractically large even for systems with fifty users. The investigator will study the design of coded caching schemes where the subpacketization level is manageable, yet significant rate reductions are possible as compared to conventional schemes.
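The scale of the problem can be made concrete: in the original coded caching scheme, with K users each caching a fraction M/N of the library, every file must be split into C(K, K*M/N) subfiles. A minimal sketch in Python (the parameter values below are illustrative, not taken from the project):

```python
# Illustration of subpacketization growth in the original coded
# caching scheme: each file is split into C(K, t) subfiles, where
# t = K*M/N and M/N is the fraction of the library each user caches.
from math import comb

def subpacketization(K: int, cached_fraction: float) -> int:
    t = round(K * cached_fraction)  # t = K*M/N, assumed an integer
    return comb(K, t)

for K in (10, 20, 50):
    print(K, subpacketization(K, 0.5))
```

Even at fifty users with half the library cached, the subfile count exceeds 10^14, i.e., far more pieces than any realistic file can be divided into, which is why low-subpacketization designs are needed.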
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
Content caching plays an important role in facilitating large scale content distribution over the Internet. Traditional caching techniques typically store relatively popular content in the local memory (or cache) available at the users, or at the edge of the network. If a given user request can be serviced from the cache (instead of the server), the overall network traffic is reduced.
Coded caching is a technique that uses ideas from information theory to significantly reduce the network traffic induced within caching networks. The basic idea is to judiciously combine information from different files to satisfy the requests of different users. In theory, coded caching allows for huge reductions in the network traffic of caching networks. Nevertheless, several important practical issues need to be addressed before the promised gains of coded caching can be realized in real-world scenarios.
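The combining step can be illustrated in the smallest nontrivial case: two users, two files, and caches holding half of each file. A minimal sketch in Python, with purely illustrative file contents:

```python
# Toy illustration of the coded caching idea for K = 2 users and
# N = 2 files, each split into two subfiles. File contents are
# placeholders, not real data.

def xor(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two files, each split into two subfiles: A = A1|A2, B = B1|B2.
A = [b"AAAAAAAA", b"aaaaaaaa"]
B = [b"BBBBBBBB", b"bbbbbbbb"]

# Placement phase: each user caches half of every file.
cache1 = {"A": A[0], "B": B[0]}  # user 1 stores A1 and B1
cache2 = {"A": A[1], "B": B[1]}  # user 2 stores A2 and B2

# Delivery phase: user 1 requests file A, user 2 requests file B.
# Instead of unicasting A2 and B1 separately (two subfiles), the
# server broadcasts a single coded subfile.
broadcast = xor(A[1], B[0])  # A2 XOR B1

# Each user cancels the part it already holds in its cache.
A2_recovered = xor(broadcast, cache1["B"])  # user 1: (A2^B1)^B1 = A2
B1_recovered = xor(broadcast, cache2["A"])  # user 2: (A2^B1)^A2 = B1

assert A2_recovered == A[1] and B1_recovered == B[0]
```

User 1 recovers the missing half of A by XOR-ing the broadcast with the half of B it cached, and vice versa for user 2, so one coded transmission replaces two uncoded ones, i.e., a factor-of-two traffic reduction even in this tiny example.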
For instance, the original proposals assume that all users request content synchronously, i.e., at the same time. We note that minimizing end-user delay is one of the main aims of caching. For example, a given online video-on-demand user wants his/her video playback to start as soon as possible, so expecting this user to wait until other users also request content from the server is not reasonable. The first part of our work provides provably efficient algorithms that take user asynchronism into account. The algorithms minimize the induced network traffic while ensuring that each user's delay constraint is met.
Another important issue with current coded caching proposals is that they are only applicable in the limit of very large files. The second part of our work provides judicious designs that address this issue. Our schemes capture much of the gain of coded caching even for typical video files (around 10 GB). Furthermore, our work uncovers a deep relationship between this problem and combinatorial design theory, which is a branch of mathematics. The synergy between these two fields has resulted in very interesting findings.
This project has supported the graduate education of several students who have gone on to successful careers in industry and academia, thus enhancing the workforce of the United States of America.
Last Modified: 11/07/2021
Modified by: Aditya Ramamoorthy