Award Abstract # 1446631
CPS: Synergy: Doing More With Less: Cost-Effective Infrastructure for Automotive Vision Capabilities

NSF Org: CNS
Division Of Computer and Network Systems
Recipient: UNIVERSITY OF NORTH CAROLINA AT CHAPEL HILL
Initial Amendment Date: September 9, 2014
Latest Amendment Date: August 10, 2015
Award Number: 1446631
Award Instrument: Standard Grant
Program Manager: Sankar Basu
sabasu@nsf.gov
(703) 292-7843
CNS Division Of Computer and Network Systems
CSE Directorate for Computer and Information Science and Engineering
Start Date: January 1, 2015
End Date: December 31, 2018 (Estimated)
Total Intended Award Amount: $1,000,000.00
Total Awarded Amount to Date: $1,046,850.00
Funds Obligated to Date: FY 2014 = $1,000,000.00
FY 2015 = $46,850.00
History of Investigator:
  • James Anderson (Principal Investigator)
    anderson@cs.unc.edu
  • Shige Wang (Co-Principal Investigator)
  • Alexander Berg (Co-Principal Investigator)
  • Sanjoy Baruah (Co-Principal Investigator)
Recipient Sponsored Research Office: University of North Carolina at Chapel Hill
104 AIRPORT DR STE 2200
CHAPEL HILL
NC  US  27599-5023
(919)966-3411
Sponsor Congressional District: 04
Primary Place of Performance: University of North Carolina at Chapel Hill
201 S. Columbia St.
Chapel Hill
NC  US  27599-3175
Primary Place of Performance Congressional District: 04
Unique Entity Identifier (UEI): D3LHU66KBLD5
Parent UEI: D3LHU66KBLD5
NSF Program(s): CPS-Cyber-Physical Systems
Primary Program Source: 01001415DB NSF RESEARCH & RELATED ACTIVIT
01001516RB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 8235, 8237
Program Element Code(s): 791800
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Many safety-critical cyber-physical systems rely on advanced sensing capabilities to react to changing environmental conditions. One such domain is automotive systems. In this domain, a proliferation of advanced sensor technology is being fueled by an expanding range of autonomous capabilities (blind spot warnings, automatic lane-keeping, etc.). The limit of this expansion is full autonomy, which has been demonstrated in various one-off prototypes, but at the expense of significant hardware over-provisioning that is not tenable for a consumer product. To enable features approaching full autonomy in a commercial vehicle, software infrastructure will be required that enables multiple sensor-processing streams to be multiplexed onto a common hardware platform at reasonable cost. This project is directed at the development of such infrastructure.

The desired infrastructure will be developed by focusing on a particularly compelling challenge problem: enabling cost-effective driver-assist and autonomous-control automotive features that utilize vision-based sensing through cameras. This problem will be studied by (i) examining numerous multicore-based hardware configurations at various fixed price points based on realistic automotive use cases, and by (ii) characterizing the range of vision-based workloads that can be feasibly supported using the software infrastructure to be developed. The research to be conducted will be a collaboration involving academic researchers at UNC and engineers at General Motors Research. The collaborative nature of this effort increases the likelihood that the results obtained will have real impact in the U.S. automotive industry. Additionally, this project is expected to produce new open-source software and tools, new course content, public outreach through participation in UNC's demo program, and lectures and seminars by the investigators at national and international forums.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

(Showing: 1 - 10 of 51)
Abhishek Singh and Sanjoy Baruah, "Global EDF scheduling of multiple independent synchronous dataflow graphs", Proceedings of the IEEE Real-Time Systems Symposium (RTSS 2017), 2017, pp. 307-318
Abhishek Singh, Pontus Ekberg, and Sanjoy Baruah, "Uniprocessor scheduling of real-time synchronous dataflow tasks", Real-Time Systems, 2018
Alan Burns and Sanjoy Baruah, "Migrating Mixed Criticality Tasks within a Cyclic Executive Framework", Proceedings of the International Conference on Reliable Software Technologies (Ada-Europe), 2017
Alessandro Papadopoulos, Enrico Bini, Sanjoy Baruah, and Alan Burns, "AdaptMC: A Control-Theoretic Approach for Achieving Resilience in Mixed-Criticality Systems", Proceedings of the Euromicro Conference on Real-Time Systems (ECRTS 2018), 2018
Calvin Deutschbein, Tom Fleming, Alan Burns, and Sanjoy Baruah, "Multi-core cyclic executives for safety-critical systems", Proceedings of the International Symposium on Dependable Software Engineering: Theories, Tools and Applications (SETTA 2017), 2017
C. Jarrett, B. Ward, and J. Anderson, "A Contention-Sensitive Fine-Grained Locking Protocol for Multiprocessor Real-Time Systems", Proceedings of the 23rd International Conference on Real-Time Networks and Systems, 2015, p. 3
C. Nemitz, K. Yang, M. Yang, P. Ekberg, and J. Anderson, "Multiprocessor Real-Time Locking Protocols for Replicated Resources", Proceedings of the 28th Euromicro Conference on Real-Time Systems, 2016, p. 50
C. Nemitz, T. Amert, and J. Anderson, "Real-Time Multiprocessor Locks with Nesting: Optimizing the Common Case", Proceedings of the 25th International Conference on Real-Time Networks and Systems, 2017, p. 38
C. Nemitz, T. Amert, and J. Anderson, "Using Lock Servers to Scale Real-Time Locking Protocols: Chasing Ever-Increasing Core Counts", Proceedings of the 30th Euromicro Conference on Real-Time Systems, 2018, p. 25:1
Eunbyung Park, Xufeng Han, Tamara L. Berg, and Alexander C. Berg, "Combining Multiple Sources of Knowledge in Deep CNNs for Action Recognition", IEEE Winter Conference on Applications of Computer Vision (WACV 2016), 2016
G. Elliott, K. Yang, and J. Anderson, "Supporting Real-Time Computer Vision Workloads using OpenVX on Multicore+GPU Platforms", Proceedings of the 36th IEEE Real-Time Systems Symposium, 2015, p. 273

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

In mass-market vehicles today, semi-autonomous features such as automatic lane keeping, adaptive cruise control, etc., are becoming common.  In the coming years, such features are expected to evolve into ever more sophisticated forms of driver assistance.  The hoped-for culmination of this evolution is full autonomy, which will entail endowing automobiles with "thinking capabilities" that enable them to react to complex situations in a timely fashion.  Introducing such capabilities in a cost-effective way in mass-market vehicles remains a lofty goal that will likely take many years to achieve.


At present, full autonomy has been realized only in one-off prototype vehicles.  The most press-worthy example of such a vehicle is the Google Car.  In these one-off prototypes, autonomy is achieved by equipping the vehicle with a number of computers and sensing devices, at considerable monetary expense.  For example, the computing and sensing infrastructure in (at least one version of) the Google Car reportedly cost over $150,000.  While this is not a significant expense for Google, it certainly would be for a typical consumer.


This project was directed at the development of computational infrastructure for realizing autonomous features in vehicles at monetary cost levels that are acceptable for mass-market vehicles.  The specific focus of the project was to support real-time computer-vision programs that use cameras as sensors.  Cameras are relatively cheap and are commonly used in mass-market vehicles today to provide semi-autonomous features such as automatic lane keeping and adaptive cruise control.  Most of the challenge problems investigated in the project pertained to scenarios where multiple computer-vision programs, corresponding to multiple image streams from multiple cameras, were executed on a common hardware platform.  Using a single hardware platform is much more economical than devoting separate hardware to each stream, as done in many expensive prototypes.  The various hardware platforms that were considered were all multicore platforms that use graphics processing units (GPUs) to accelerate mathematical computations that are common in autonomous driving.  A multicore platform has several processing "cores" that can execute different programs, or parts of the same program, in parallel.
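
To make the multiplexing idea concrete, the following small Python sketch (not the project's actual software) shows frames from several camera streams being dispatched to one shared pool of worker processes, rather than to per-stream hardware.  The stream names, frame counts, and the per-frame "processing" stub are invented purely for illustration.

    # Illustrative sketch only: several camera streams multiplexed onto one
    # shared pool of worker processes (a stand-in for a common multicore
    # platform).  Stream names and the per-frame work are hypothetical.
    import multiprocessing as mp
    import time

    STREAMS = ["front_camera", "left_camera", "right_camera", "rear_camera"]

    def process_frame(job):
        """Stand-in for a per-frame computer-vision kernel (e.g., detection)."""
        stream, frame_id = job
        time.sleep(0.005)              # pretend to do some per-frame work
        return "%s: processed frame %d" % (stream, frame_id)

    def main():
        # Four workers model a shared four-core platform; every stream's frames
        # are dispatched to this one pool rather than to per-stream hardware.
        jobs = [(s, i) for s in STREAMS for i in range(3)]
        with mp.Pool(processes=4) as pool:
            for result in pool.imap_unordered(process_frame, jobs):
                print(result)

    if __name__ == "__main__":
        main()

In an actual automotive system, the best-effort pool above would be replaced by a real-time scheduler so that timing guarantees of the kind discussed next can be established.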


The main intellectual contributions of this project were twofold.  First, a range of new methods was developed that enable computer-vision programs to exploit the significant parallelism available on multicore+GPU platforms.  Second, new analytical results were produced that enable response-time bounds for computer-vision programs to be certified.  This analysis might be used, for example, to certify that any obstacle in the road is detected early enough for the vehicle to respond appropriately.
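
As a rough illustration of what certifying a response-time bound involves, the sketch below iterates the classical uniprocessor fixed-priority recurrence R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j until it reaches a fixed point.  This textbook analysis is only a stand-in for the project's far more involved multicore+GPU analysis, and the task parameters are invented.

    # Illustrative sketch only: classical uniprocessor response-time analysis
    # for fixed-priority tasks.  The project's actual analysis addresses
    # multicore+GPU platforms; the task set below is hypothetical.
    import math

    # (worst-case execution time C, period T), highest priority first; deadlines
    # are assumed equal to periods.  Units are arbitrary (say, milliseconds).
    TASKS = [(2, 10), (4, 20), (6, 50)]

    def response_time_bound(i, tasks):
        """Iterate R = C_i + sum_j ceil(R / T_j) * C_j over higher-priority tasks j."""
        c_i, t_i = tasks[i]
        r = c_i
        while True:
            interference = sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
            r_next = c_i + interference
            if r_next > t_i:
                return None            # bound exceeds the deadline: not certifiable
            if r_next == r:
                return r               # fixed point reached: certified bound
            r = r_next

    if __name__ == "__main__":
        for i, (c, t) in enumerate(TASKS):
            print("task %d: C=%d, T=%d, response-time bound = %s"
                  % (i, c, t, response_time_bound(i, TASKS)))

For the multicore+GPU setting studied in this project, the published analyses must additionally account for parallel execution and accelerator access, which is what makes the certification problem substantially harder than this single-processor recurrence.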


In terms of broader impacts, the investigators presented talks on this work at numerous institutions, conferences, workshops, etc.  Additionally, the results of this project formed the basis of the Ph.D. dissertations of three graduate students.  Three undergraduate honors theses were also produced under this project.  Some of the results from the project were also applied in automotive systems at General Motors as part of summer internship positions undertaken by one of the supported graduate students.  A small-scale autonomous car was also developed and exhibited at various open-house demo events at UNC.


Last Modified: 02/12/2019
Modified by: James H Anderson
