Award Abstract # 1422031
CSR: Small: Dynamically Reconfigurable Architectures for Time-Varying Image Constraints (DRASTIC) Based on Local Modeling and User Constraint Prediction

NSF Org: CNS
Division Of Computer and Network Systems
Recipient: UNIVERSITY OF NEW MEXICO
Initial Amendment Date: August 25, 2014
Latest Amendment Date: September 10, 2014
Award Number: 1422031
Award Instrument: Standard Grant
Program Manager: Marilyn McClure
mmcclure@nsf.gov
 (703)292-5197
CNS
 Division Of Computer and Network Systems
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: October 1, 2014
End Date: September 30, 2017 (Estimated)
Total Intended Award Amount: $459,870.00
Total Awarded Amount to Date: $459,870.00
Funds Obligated to Date: FY 2014 = $459,870.00
History of Investigator:
  • Marios Pattichis (Principal Investigator)
    pattichi@unm.edu
  • Daniel Llamocca (Co-Principal Investigator)
Recipient Sponsored Research Office: University of New Mexico
1 UNIVERSITY OF NEW MEXICO
ALBUQUERQUE
NM  US  87131-0001
(505)277-4186
Sponsor Congressional District: 01
Primary Place of Performance: University of New Mexico
NM  US  87131-0001
Primary Place of Performance Congressional District: 01
Unique Entity Identifier (UEI): F6XLTRUQJEN4
Parent UEI:
NSF Program(s): CSR-Computer Systems Research,
EPSCoR Co-Funding
Primary Program Source: 01001415DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 7923, 9150
Program Element Code(s): 735400, 915000
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

The use of digital video in embedded and communications systems has risen dramatically in recent years. User needs and interests in digital video vary with video content and with available computing resources such as battery life and communications bandwidth. Thus, there is a strong need for methods that can dynamically reconfigure hardware and software resources in real time in response to changing video content, user needs, or available computing resources. The proposed research will develop methods for run-time management of hardware and software resources for video processing and communications that are jointly optimal in terms of energy, bandwidth, and throughput.

Digital video processing and communication often consume the majority of computing resources and bandwidth. With the emergence of the High-Efficiency Video Coding (HEVC) standard, there is a focus on the development of parallel architecture solutions that can provide real-time coding of high-resolution video while reducing bandwidth requirements by up to 50% relative to the previous coding standard (H.264/AVC). However, HEVC's focus on rate-distortion optimization does not consider how computing architectures should adapt to time-varying energy constraints. The proposed research will focus on the development of Dynamically Reconfigurable Architecture Systems for satisfying Time-varying Image processing Constraints (DRASTIC) that optimize computing resources to satisfy time-varying constraints on energy, bandwidth, and image quality for HEVC and for video analysis based on 2D/3D filterbanks. The research is transformative in two ways: (i) it supports the automatic generation of time-varying constraints based on video content and available energy, eliminating the need for user inputs, and (ii) it uses a local model to significantly reduce the requirements for estimating large Pareto fronts over a large space of videos. These approaches can significantly expand the applicability of the proposed system and methods.
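The constraint-driven selection described in the abstract can be illustrated with a small sketch. This is not the project's implementation: the configuration names, objective values, and constraint numbers below are invented for the example, and the real system optimizes over HEVC coding modes and hardware configurations rather than a four-entry table.

```python
# Illustrative sketch: keep only Pareto-optimal configurations over
# (energy, bitrate, distortion), then pick the feasible one with the
# lowest distortion under the current time-varying constraints.
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    name: str
    energy: float      # joules per frame (lower is better)
    bitrate: float     # Mbps (lower is better)
    distortion: float  # quality loss proxy (lower is better)

def pareto_front(configs):
    """Keep configurations that no other configuration dominates in all objectives."""
    return [c for c in configs
            if not any(o != c and o.energy <= c.energy
                       and o.bitrate <= c.bitrate
                       and o.distortion <= c.distortion
                       for o in configs)]

def select(configs, max_energy, max_bitrate):
    """Among feasible Pareto-optimal configurations, minimize distortion."""
    feasible = [c for c in pareto_front(configs)
                if c.energy <= max_energy and c.bitrate <= max_bitrate]
    return min(feasible, key=lambda c: c.distortion) if feasible else None

configs = [
    Config("low_power", 0.5, 1.0, 0.30),
    Config("balanced",  1.0, 2.0, 0.15),
    Config("high_qual", 2.0, 4.0, 0.05),
    Config("wasteful",  2.5, 4.0, 0.15),  # dominated by high_qual
]
best = select(configs, max_energy=1.5, max_bitrate=3.0)
```

The "wasteful" entry is discarded as Pareto-dominated, and the selector returns the tightest feasible point on the front; as the energy or bandwidth budget changes over time, the same front supports a new selection without re-profiling.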

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


(Showing: 1 - 10 of 13)
Antoniou, Z.C., Panayides, A.S., Pantziaris, M., Constantinides, A.G., Pattichis, C.S., and Pattichis, M.S. "Dynamic Network Adaptation For Real-Time Medical Video Communication" Proc. of XIV Mediterranean Conference on Medical and Biological Engineering and Computing, Medicon'16 , 2016 , p.1093
Antoniou, Z., Stavrou, S., Panayides, A.S., Kyriacou, E., Constantinides "Adaptive Emergency Scenery Video Communications using HEVC for Responsive Decision Support in Disaster Incidents" IEEE EMBC , 2015 , p.173
Carranza, C., Llamocca, D., and Pattichis, M.S. "Fast 2D Convolutions and Cross-Correlations Using Scalable Architectures" IEEE Transactions on Image Processing , v.26 , 2017 , p.2230 10.1109/TIP.2017.2678799
Carranza, C., Llamocca, D., and Pattichis, M.S. "Fast and Scalable Computation of the Forward and Inverse Discrete Periodic Radon Transform" IEEE Transactions on Image Processing , v.25 , 2016 , p.119 http://dx.doi.org/10.1109/TIP.2015.2501725
Llamocca, D., and Pattichis, M.S. "Dynamic Energy, Performance, and Accuracy Optimization and Management Using Automatically Generated Constraints for Separable 2D FIR Filtering for Digital Video Processing" ACM Transactions on Reconfigurable Technology and Systems , v.7 , 2015 http://dx.doi.org/10.1145/2629623
Eilar, C., Jatla, V., Pattichis, M. S., Celedón-Pattichis, S., & LópezLeiva, C. A. "Distributed Video Analysis for the Advancing Out of School Learning in Mathematics and Engineering Project" Asilomar Conference on Signals, Systems, and Computers , 2016 , p.571
Esakki, G., Jatla, V., and Pattichis, M. "Adaptive High Efficiency Video Coding Based on Camera Activity Classification" 2017 Data Compression Conference , 2017 , p.438
Esakki, G., Jatla, V., and Pattichis, M.S. "Optimal HEVC Encoding Based on GOP Configurations" IEEE Southwest Symposium on Image Analysis and Interpretation , 2016 , p.25 http://dx.doi.org/10.1109/SSIAI.2016.7459166
Jiang, Y., Zong, C., and Pattichis, M.S. "Scalable HEVC Intra Frame Complexity Control Subject to Quality and Bitrate Constraints" 3rd IEEE Global Conference on Signal & Information Processing , 2015 , p.290
Llamocca, D. "Self-Reconfigurable Architectures for HEVC Direct and Inverse Transform" Journal of Parallel and Distributed Computing , v.109 , 2017 , p.178 https://doi.org/10.1016/j.jpdc.2017.05.017
Llamocca, D., and Dean, B.K. "A scalable pipelined architecture for biomimetic vision sensors" 2015 25th International Conference on Field Programmable Logic and Applications (FPL) , 2015 http://dx.doi.org/10.1109/FPL.2015.7293935

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

Digital videos are everywhere. Digital video communications dominate internet traffic, and digital image and video analysis has become an essential component of many industries, with current applications ranging from self-driving cars to computer-aided diagnosis of human diseases. Accordingly, there is a strong need to develop computing technologies that support effective video communications and the wider adoption of video analysis technologies.

The DRASTIC project developed effective computing architectures and software models for digital video processing and communications. A distinguishing characteristic of the DRASTIC family of architectures is that they are optimal as measured in terms of power/energy and performance: DRASTIC selects the best possible architecture by balancing requirements on power/energy against requirements on performance. For example, if more power/energy is available to support additional hardware resources, DRASTIC will provide an architecture that yields better performance. DRASTIC showed that a large number of hardware-software configurations can be summarized with simple mathematical equations, which can then be used to adapt in real time to changing needs.

There are significant practical implications of the DRASTIC project outcomes. For example, for real-time video communications on mobile devices, DRASTIC can select an architecture that minimizes energy consumption while still compressing video in real time. DRASTIC demonstrated a critical application of the technology in stroke ultrasound video communications, where the mathematical models allow video encoding to adapt to different network constraints without sacrificing clinical diagnostic quality. The results imply that DRASTIC can enable effective communication of stroke ultrasound videos from a fast-moving ambulance during emergency events.

For image and video analysis applications, the DRASTIC project developed a new family of filtering architectures based on separable approximations and the Discrete Periodic Radon Transform (DPRT). For example, DRASTIC provides efficient hardware architectures for object tracking using cross-correlation, and for filter-bank-based feature extraction. Overall, given that the most time-consuming image analysis methods are built from 2D cross-correlations and convolutions, DRASTIC has provided methods that can significantly speed up most image and video analysis systems.
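The payoff of separable filtering can be seen in a short sketch: when a 2D kernel factors into an outer product of two 1D filters, a row pass followed by a column pass reproduces the full 2D convolution at roughly k-fold lower cost for a k x k kernel. The kernel and image below are illustrative, and this software sketch only demonstrates the separability principle; the project's hardware architectures and DPRT-based methods go well beyond it.

```python
# Illustrative sketch: a separable 2D convolution (row pass then column
# pass) matches a direct 2D convolution when the kernel is an outer product.
import numpy as np

def conv2d_direct(img, K):
    """Reference 'same'-size 2D convolution by explicit summation."""
    kh, kw = K.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))   # zero padding
    Kf = K[::-1, ::-1]                           # flip kernel for true convolution
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * Kf)
    return out

def conv2d_separable(img, v, h):
    """Row pass with h, then column pass with v; valid when K = outer(v, h)."""
    rows = np.array([np.convolve(r, h, mode="same") for r in img.astype(float)])
    return np.array([np.convolve(c, v, mode="same") for c in rows.T]).T

v = np.array([1.0, 2.0, 1.0])    # vertical factor (smoothing)
h = np.array([1.0, 0.0, -1.0])   # horizontal factor (derivative)
K = np.outer(v, h)               # 3x3 Sobel-like kernel
img = np.arange(36, dtype=float).reshape(6, 6)

full = conv2d_direct(img, K)
fast = conv2d_separable(img, v, h)
```

For an N x N image and k x k kernel, the direct form costs O(N^2 k^2) multiplies while the separable form costs O(N^2 k), which is exactly the kind of saving the filtering architectures exploit in hardware.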

The broader impacts of the project include the development of new courses, teaching middle-school students from under-represented groups, conference presentations, and the support of several undergraduate and graduate students. DRASTIC supported the Advancing Out-of-School Learning in Mathematics and Engineering (AOLME) project, an after-school program at an urban and a rural middle school. In level I, AOLME taught the basics of computer architecture and Python programming using the Raspberry Pi, how to represent numbers in binary and hexadecimal, and how to create complex images and videos using Cartesian coordinate systems. In level II, AOLME focused on image transformations, sprites, histograms, and the traveling salesperson problem. The AOLME project impacted 44 middle-school students, 16 undergraduate and graduate student facilitators, and 4 teachers. DRASTIC also supported the research of two Ph.D. students and two M.Sc. thesis students.

To further disseminate the research, the project supported the creation of detailed tutorials at Oakland University and the University of New Mexico. DRASTIC developed openly available tutorials on dynamic partial reconfiguration and on implementing digital image processing cores on modern devices. For graduate courses in image and video processing, DRASTIC provided an overview of the mathematical models and of the hardware and software implementations of the image and video processing and communications methods.

The DRASTIC research findings were disseminated in high-quality publications. The project resulted in journal publications in the IEEE Transactions on Image Processing, the ACM Transactions on Reconfigurable Technology and Systems (TRETS), the IEEE Journal of Biomedical and Health Informatics, and the Journal of Real-Time Image Processing. The research was also presented at the Data Compression Conference, IEEE EMBC, IEEE SSIAI, IEEE GlobalSIP, and IEEE ICASSP and published in the conference proceedings. The lessons learned from teaching middle-school students were published in a chapter of the book Access and Equity: Promoting High-Quality Mathematics in Grades 6-8, published by the National Council of Teachers of Mathematics.

Lastly, current efforts are focused on commercializing the DRASTIC technology. Three patents are pending on the video communications research, the DPRT, and the hardware cores for fast 2D convolutions and cross-correlations. The PI co-founded a startup, ClearStream Technologies, which has licensed the pending patent on video communications. ClearStream Technologies was a USA finalist for the 2017 Creative Business Cup and is currently seeking funding and partnerships to commercialize the technology.

Last Modified: 12/27/2017
Modified by: Marios S Pattichis
