
NSF Org: CNS Division Of Computer and Network Systems
Initial Amendment Date: September 9, 2014
Latest Amendment Date: August 10, 2015
Award Number: 1446631
Award Instrument: Standard Grant
Program Manager: Sankar Basu, sabasu@nsf.gov, (703) 292-7843, CNS Division Of Computer and Network Systems, CSE Directorate for Computer and Information Science and Engineering
Start Date: January 1, 2015
End Date: December 31, 2018 (Estimated)
Total Intended Award Amount: $1,000,000.00
Total Awarded Amount to Date: $1,046,850.00
Funds Obligated to Date: FY 2015 = $46,850.00
Recipient Sponsored Research Office: 104 Airport Dr Ste 2200, Chapel Hill, NC 27599-5023, US, (919) 966-3411
Primary Place of Performance: 201 S. Columbia St., Chapel Hill, NC 27599-3175, US
NSF Program(s): CPS-Cyber-Physical Systems
Primary Program Source: 01001516RB NSF RESEARCH & RELATED ACTIVIT
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
Many safety-critical cyber-physical systems rely on advanced sensing capabilities to react to changing environmental conditions. One such domain is automotive systems. In this domain, a proliferation of advanced sensor technology is being fueled by an expanding range of autonomous capabilities (blind spot warnings, automatic lane-keeping, etc.). The limit of this expansion is full autonomy, which has been demonstrated in various one-off prototypes, but at the expense of significant hardware over-provisioning that is not tenable for a consumer product. To enable features approaching full autonomy in a commercial vehicle, software infrastructure will be required that enables multiple sensor-processing streams to be multiplexed onto a common hardware platform at reasonable cost. This project is directed at the development of such infrastructure.
The desired infrastructure will be developed by focusing on a particularly compelling challenge problem: enabling cost-effective driver-assist and autonomous-control automotive features that utilize vision-based sensing through cameras. This problem will be studied by (i) examining numerous multicore-based hardware configurations at various fixed price points based on realistic automotive use cases, and by (ii) characterizing the range of vision-based workloads that can be feasibly supported using the software infrastructure to be developed. The research to be conducted will be a collaboration involving academic researchers at UNC and engineers at General Motors Research. The collaborative nature of this effort increases the likelihood that the results obtained will have real impact in the U.S. automotive industry. Additionally, this project is expected to produce new open-source software and tools, new course content, public outreach through participation in UNC's demo program, and lectures and seminars by the investigators at national and international forums.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
In mass-market vehicles today, semi-autonomous features such as automatic lane keeping, adaptive cruise control, etc., are becoming common. In the coming years, such features are expected to evolve to provide ever more sophisticated driver-assistance features. The hoped-for culmination of this evolution is full autonomy, which will entail endowing automobiles with "thinking capabilities" that enable them to react to complex situations in a timely fashion. Introducing such capabilities in a cost-effective way in mass-market vehicles remains a lofty goal that will likely take many years to achieve.
At present, full autonomy has been realized only in one-off prototype vehicles. The most press-worthy example of such a vehicle is the Google Car. In these one-off prototypes, autonomy is achieved by equipping the vehicle with a number of computers and sensing devices, at considerable monetary expense. For example, the computing and sensing infrastructure in (at least one version of) the Google Car reportedly cost over $150,000. While this is not a significant expense for Google, it certainly would be for a typical consumer.
This project was directed at the development of computational infrastructure for realizing autonomous features in vehicles at monetary cost levels that are acceptable for mass-market vehicles. The specific focus of the project was to support real-time computer-vision programs that use cameras as sensors. Cameras are relatively cheap and are commonly used in mass-market vehicles today to provide semi-autonomous features such as automatic lane keeping and adaptive cruise control. Most of the challenge problems investigated in the project pertained to scenarios where multiple computer-vision programs, corresponding to multiple image streams from multiple cameras, were executed on a common hardware platform. Using a single hardware platform is much more economical than devoting separate hardware to each stream, as done in many expensive prototypes. The various hardware platforms that were considered were all multicore platforms that use graphics processing units (GPUs) to accelerate mathematical computations that are common in autonomous driving. A multicore platform has several processing "cores" that can execute different programs, or parts of the same program, in parallel.
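The multiplexing idea described above can be illustrated with a small sketch. In the example below (all names and the trivial per-frame computation are hypothetical stand-ins, not the project's software), several camera streams share one fixed pool of workers, mirroring the goal of running many vision streams on a single common platform rather than dedicating hardware to each stream:

```python
from concurrent.futures import ThreadPoolExecutor

def process_frame(stream_id, frame_id):
    # Stand-in for a per-frame vision computation (detection, tracking, ...).
    return (stream_id, frame_id)

def multiplex_streams(num_streams, frames_per_stream, num_workers):
    # One shared pool of `num_workers` workers serves every camera stream,
    # instead of each stream claiming its own dedicated hardware.
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        futures = [pool.submit(process_frame, s, f)
                   for s in range(num_streams)
                   for f in range(frames_per_stream)]
        return [fut.result() for fut in futures]

# 4 streams x 3 frames each, processed on just 2 shared workers.
frames = multiplex_streams(num_streams=4, frames_per_stream=3, num_workers=2)
print(len(frames))  # 12
```

The economic argument in the paragraph above is exactly this sharing: the worker pool (cores) is sized for the aggregate workload, not replicated per stream.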
The main intellectual contributions of this project were twofold. First, a range of new methods was developed that enable computer-vision programs to exploit the significant parallelism available on multicore+GPU platforms. Second, new analytical results were produced that enable response-time bounds for computer-vision programs to be certified. This analysis might be used, for example, to certify that any obstacle in the road is detected in enough time to ensure that the vehicle has time to respond appropriately.
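To give a flavor of what "certifying a response-time bound" means, the sketch below shows the classical textbook fixed-priority response-time analysis for a uniprocessor, computed as a fixed-point iteration. This is an illustrative simplification, not the project's actual multicore+GPU analysis:

```python
from math import ceil

def response_time_bounds(tasks):
    """Classic fixed-priority response-time analysis.

    `tasks` is a list of (C, T) pairs -- worst-case execution time and
    period -- listed from highest to lowest priority.  Returns a list of
    response-time bounds; an entry is None if the bound exceeds the
    task's period, i.e., the task cannot be certified schedulable.
    """
    bounds = []
    for i, (c_i, t_i) in enumerate(tasks):
        r = c_i
        while True:
            # Interference from each higher-priority task j: it is
            # released ceil(r / T_j) times in a window of length r, and
            # each release can preempt task i for up to C_j time units.
            nxt = c_i + sum(ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
            if nxt == r:       # fixed point reached: r is a valid bound
                bounds.append(r)
                break
            if nxt > t_i:      # bound exceeds the period: not certifiable
                bounds.append(None)
                break
            r = nxt
    return bounds

# Three periodic tasks (C, T) in rate-monotonic priority order.
print(response_time_bounds([(1, 4), (2, 6), (3, 12)]))  # [1, 3, 10]
```

In the obstacle-detection example from the paragraph above, such a bound would certify that a detection task always completes within its window, regardless of interference from other programs sharing the platform.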
In terms of broader impacts, the investigators presented talks on this work at numerous institutions, conferences, workshops, etc. Additionally, the results of this project formed the basis of the Ph.D. dissertations of three graduate students. Three undergraduate honors theses were also produced under this project. Some of the results from the project were also applied in automotive systems at General Motors as part of summer internship positions undertaken by one of the supported graduate students. A small-scale autonomous car was also developed and exhibited at various open-house demo events at UNC.
Last Modified: 02/12/2019
Modified by: James H Anderson