
NSF Org: CCF Division of Computing and Communication Foundations
Initial Amendment Date: October 22, 2018
Latest Amendment Date: February 13, 2020
Award Number: 1855706
Award Instrument: Standard Grant
Program Manager: Danella Zhao, CCF Division of Computing and Communication Foundations, CSE Directorate for Computer and Information Science and Engineering
Start Date: November 1, 2018
End Date: September 30, 2022 (Estimated)
Total Intended Award Amount: $300,000.00
Total Awarded Amount to Date: $316,000.00
Funds Obligated to Date: FY 2019 = $16,000.00
Recipient Sponsored Research Office: 307 N UNIVERSITY BLVD, MOBILE, AL, US 36608-3053; (251) 460-6333
Primary Place of Performance: 150 Jaguar Drive, Mobile, AL, US 36688-0002
NSF Program(s): Software & Hardware Foundation
Primary Program Source: 01001920DB NSF RESEARCH & RELATED ACTIVIT
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
Mobile devices, such as smartphones, are increasingly used for watching videos, since they can conveniently serve this purpose anytime and anywhere, such as while commuting on a subway or train, sitting in a waiting room, or lounging at home. Due to the large data size and intensive computation involved, video processing requires frequent memory accesses that consume a large amount of power, limiting battery life and frustrating mobile users. On one hand, memory designers focus on hardware-level power-optimization techniques without considering how hardware performance influences viewers' actual experience. On the other hand, the human visual system is limited in its ability to detect subtle degradations in image quality; for example, under high ambient illumination, such as outdoors in direct sunlight, the veiling luminance (i.e., glare) on the screen of a mobile device can effectively mask imperfections in the image, so that a video can be rendered at lower than full quality without the viewer being able to detect any difference. This isolation between hardware design and viewer experience significantly increases hardware implementation overhead due to overly pessimistic design margins. This project integrates viewer awareness and hardware adaptation to achieve power optimization without degrading video quality as perceived by users. The results of this project will impact basic research on both hardware design and human vision, and will provide critical viewer-awareness data from human subjects that can be used to engineer better video rendering for increased battery life on mobile devices. The project will directly involve undergraduate and graduate students, including women and Native Americans, in interdisciplinary research.
Developing a viewer-aware mobile video-memory solution has proven to be a challenging problem due to (i) the complexity of existing viewer-experience models; (ii) memory modules that lack runtime adaptation; and (iii) the difficulty of viewer-experience analysis for hardware designers. This project addresses the problem by (i) focusing on the viewing-context factor that most strongly influences viewer experience, ambient luminance; (ii) proposing novel methodologies for adaptive hardware design; and (iii) integrating a unique combination of expertise from the investigators, ranging from psychology to integrated-circuit design and embedded systems. Specifically, this project will (i) experimentally and mathematically connect viewer experience, ambient illuminance, and memory performance; (ii) develop energy-quality adaptive hardware that adjusts memory usage based on ambient luminance so as to reduce power consumption without impacting viewer experience; and (iii) design a mobile video system to fully evaluate the effectiveness of the developed methodologies.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
The goal of this SHF project was to develop viewer-aware mobile video-memory design techniques by integrating mobile viewer awareness with hardware adaptation to achieve power optimization. It addressed critical challenges in viewer-aware video hardware design in three ways: (1) experimentally and mathematically connecting viewer experience, ambient illuminance, and memory performance; (2) developing energy-quality adaptive hardware that adjusts memory usage based on ambient luminance to reduce power consumption without impacting viewer experience; and (3) designing a mobile video system for verification.
We conducted psychophysical experiments on viewer experience in different viewing contexts, which showed that when a video system operates in poor lighting conditions, viewers can tolerate larger amounts of quality degradation, so that more least-significant bits (LSBs) of video data can be truncated to save energy without impacting perceived video quality. We also assessed the degree to which human observers could discriminate between reference videos and bit-truncated versions rendered and displayed at various levels of objective quality, as a function of the ambient-illumination level. Based on the experimental results, we developed expectation-based mathematical models connecting video quality and memory performance. Numerical studies on embedded memory design showed that the developed models provide a useful and fast tool for enabling optimal hardware designs.
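For illustration, here is a minimal sketch of the LSB-truncation idea (function names and frame sizes are hypothetical, not the project's actual implementation): zeroing the low-order bits of each 8-bit luminance sample reduces the memory activity needed per pixel, at the cost of quantization error that bright ambient light can mask.

```python
import numpy as np

def truncate_lsbs(frame: np.ndarray, n_bits: int) -> np.ndarray:
    """Zero out the n least-significant bits of each 8-bit pixel sample."""
    mask = 0xFF & ~((1 << n_bits) - 1)
    return frame & mask

def psnr(reference: np.ndarray, test: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB for 8-bit frames."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

# Brighter ambient light masks more error, so more LSBs can be dropped.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
for n in (1, 2, 3, 4):
    print(n, "truncated bits ->", round(psnr(frame, truncate_lsbs(frame, n)), 1), "dB")
```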
We studied viewer-aware hardware-implementation schemes for dynamic energy-quality knobs with minimal effect on viewer experience, including voltage scaling, bitcell structures, LSB truncation, error-correction-code (ECC) schemes, device sizing, hardening of most-significant bits (MSBs), and hardening of parity bits. As an example, we developed a novel viewer-aware bit-truncation technique that enables a better visual experience while maintaining similar power efficiency. We also designed a flexible, power-efficient video memory that can dynamically adjust the strength of its ECC, enabling a power-quality trade-off based on application requirements. We developed a new adaptive ECC technique that selects among three power-quality trade-off levels for video applications: Hamming(7,4) code, Hamming(15,11) code, and no ECC. Using 1,000 randomly selected videos, our results showed that the developed memory enables runtime quality adaptation with significantly reduced overhead and better video quality compared to existing techniques.
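As a concrete illustration of the strongest ECC setting, below is a minimal Hamming(7,4) encoder/decoder sketch (the standard textbook construction, not the project's circuit): Hamming(7,4) protects 4 data bits with 3 parity bits and corrects any single bit error, while Hamming(15,11) amortizes 4 parity bits over 11 data bits for lower overhead but weaker per-bit protection.

```python
def hamming74_encode(d):
    """Encode 4 data bits (list of 0/1) into a 7-bit Hamming codeword.
    Layout: [p1, p2, d1, p3, d2, d3, d4] with even parity."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity check over positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity check over positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity check over positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the error, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

# A single-bit fault in a protected MSB nibble is corrected transparently.
word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                          # inject a fault
assert hamming74_decode(word) == [1, 0, 1, 1]
```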
Additionally, we studied the impact of video content on viewer experience from a psychological perspective. Our research demonstrated a correlation between the "banding distortion" that hardware noise causes viewers to perceive and frame regions that exhibit low variance among pixel luminance values, which opens content-adaptation opportunities for hardware design. Based on macroblock-characteristics analysis and subjective video testing, two models, a decision tree and a logistic regression, were developed to effectively connect video content to the hardware design process. Combining these models with the viewer-aware bit-truncation technique described above, we implemented a content-adaptive video memory design with a dynamic energy-quality trade-off that enables up to 33.31% power savings.
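A minimal sketch of the low-variance screen for banding-prone regions follows (the block size and variance threshold are illustrative assumptions, not the project's tuned values): smooth macroblocks are flagged so the truncation knob can be dialed back there.

```python
import numpy as np

def banding_prone_blocks(luma: np.ndarray, block: int = 16,
                         var_thresh: float = 4.0) -> np.ndarray:
    """Return a boolean map marking macroblocks whose luminance variance is low,
    i.e. smooth regions where bit-truncation noise would show up as banding."""
    rows, cols = luma.shape[0] // block, luma.shape[1] // block
    flags = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            mb = luma[r * block:(r + 1) * block, c * block:(c + 1) * block]
            flags[r, c] = mb.astype(np.float64).var() < var_thresh
    return flags

# Smooth blocks (e.g., sky) keep more precision; textured blocks tolerate more truncation.
```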
We also developed a region-of-interest (ROI)-aware video storage technique that uses deep learning to identify the most important areas of a frame, optimizing output video quality while reducing power consumption. Building on this, we designed a content-adaptable, ROI-aware video system supporting general videos and conducted system testing covering power efficiency, number of truncated bits, output quality, and area overhead. The results show that the proposed memory enables run-time quality adaptation with significantly fewer pixel bits and further power savings compared to existing techniques.
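A minimal sketch of the ROI-aware allocation idea (in the actual system the mask would come from a deep-learning detector; here it is simply a given boolean array, and the per-region bit counts are illustrative assumptions):

```python
import numpy as np

def roi_aware_truncate(luma: np.ndarray, roi_mask: np.ndarray,
                       roi_bits: int = 1, background_bits: int = 4) -> np.ndarray:
    """Keep regions of interest near full precision; truncate the background harder."""
    keep_roi = 0xFF & ~((1 << roi_bits) - 1)          # e.g., drop 1 LSB in the ROI
    keep_bg = 0xFF & ~((1 << background_bits) - 1)    # e.g., drop 4 LSBs elsewhere
    return np.where(roi_mask, luma & keep_roi, luma & keep_bg).astype(np.uint8)
```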
The outcomes of this project have been widely disseminated to the research community and the general public. The project directly trained more than ten graduate and undergraduate students (funded through supplements), including four students from underrepresented groups (women and minority students). Among them, one MS student and three PhD students have successfully defended their theses or dissertations. The team published five journal papers in prestigious venues such as IEEE Transactions on Very Large Scale Integration (VLSI) Systems, IEEE Transactions on Sustainable Computing, and IEEE Access. The findings of this project have been integrated into the PIs' existing undergraduate and graduate courses, and the project further promoted K-12 STEM and computing education through the team's organized community outreach events and an ongoing NSF-funded RET award.
Last Modified: 01/26/2023
Modified by: Na Gong
Please report errors in award information by writing to: awardsearch@nsf.gov.