
NSF Org: | IIS Division of Information & Intelligent Systems |
Recipient: | University of California, Santa Barbara |
Initial Amendment Date: | August 26, 2011 |
Latest Amendment Date: | August 26, 2011 |
Award Number: | 1149001 |
Award Instrument: | Standard Grant |
Program Manager: | Ephraim Glinert, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering |
Start Date: | January 1, 2012 |
End Date: | December 31, 2014 (Estimated) |
Total Intended Award Amount: | $124,644.00 |
Total Awarded Amount to Date: | $124,644.00 |
Funds Obligated to Date: | |
History of Investigator: | |
Recipient Sponsored Research Office: | 3227 CHEADLE HALL, SANTA BARBARA, CA US 93106-0001, (805) 893-4188 |
Sponsor Congressional District: | |
Primary Place of Performance: | CA US 93106-2050 |
Primary Place of Performance Congressional District: | |
Unique Entity Identifier (UEI): | |
Parent UEI: | |
NSF Program(s): | HCC-Human-Centered Computing, GRAPHICS & VISUALIZATION |
Primary Program Source: | |
Program Reference Code(s): | |
Program Element Code(s): | |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
This project enhances the research direction of the interdisciplinary arts-engineering Media Arts & Technology (MAT) doctoral program at UC Santa Barbara through the introduction of a practice-based study in visually applied robotics. The ultimate challenge in any arts, science, and engineering collaboration is a convincing articulation of why and how artistic contributions can benefit scientific and engineering research. We propose to develop a prototype multi-camera instrument consisting of three robotically actuated cameras for experiments in content recognition and image stabilization of real-time, noisy images generated by the moving cameras, which try to synchronize their visual content. Our project's purpose is to develop new solutions in image synchronization through the study of images generated by machine behavior, bridging the knowledge perspectives of visualization experts from the two fields of arts and engineering.
Through an iterative working process of experimental staging and evaluation of results, we plan to identify and formally define the similarities and differences in the methodologies by which artists and engineers arrive at solutions. We are therefore interested in closely examining problem-solving at both the implicit and explicit levels. Our project's principal objectives are to: a) develop an instrument of an exploratory, experimental nature that will be used in multiple ways to stimulate advances in research dealing with the optical-mechanical robotic vision machine, the study of machine behavior, and the study of machine-generated images; b) advance the infrastructure for research and education through the creation of an imaging instrument that will bring together specialists from different disciplines to explore a common computational imaging problem and lead to further arts-engineering collaboration; and c) position artistic vision as a contributing force in advancing research, thereby pushing recognition of the artistic paradigm as relevant to the research community. Part of this work will therefore be to identify opportunities that current scientific and engineering research has not explored but that hold potential and resonance for both disciplines.
The project will have significant educational impact, as imaging, computation, and robotics form a curriculum and research direction that MAT has sought to integrate for years, and the project will formally set the stage for direct engagement with the controls branch of mechanical engineering in the College of Engineering. Dissemination of results will occur across a broader spectrum than conventional research venues such as engineering conference papers and academic arts presentations. This proposal also includes the creation of a state-of-the-art installation based on the results of the experimental studies, to be circulated among the general public through museum exhibitions, where further feedback can be collected.
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
Major goals of the research have been to develop an imaging instrument of an exploratory, experimental nature, with the intent of providing an infrastructure for both engineering and artistic exploration. The system consists of three cameras on individual rails, each of which follows a specific rule of human photographic vision translated into machine language. The instrument is being used in multiple ways, serving as an ongoing research test bed, bringing together specialists from different disciplines, and ideally stimulating advances in engineering research and further arts-engineering collaboration.
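The report does not spell out which photographic rules were encoded; as a rough, hypothetical illustration, a compositional guideline such as the rule of thirds could be translated into a machine-readable scoring function that a camera controller then tries to maximize. All names below are illustrative, not the instrument's actual code:

    import cv2
    import numpy as np

    def rule_of_thirds_score(frame_bgr):
        """Hypothetical example of a 'rule of human photographic vision'
        expressed as code: score how much edge detail falls near the
        rule-of-thirds intersection points of the frame."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 100, 200).astype(np.float32) / 255.0
        h, w = edges.shape
        intersections = [(w // 3, h // 3), (2 * w // 3, h // 3),
                         (w // 3, 2 * h // 3), (2 * w // 3, 2 * h // 3)]
        score = 0.0
        for cx, cy in intersections:
            # Sum edge energy in a small window around each intersection.
            y0, y1 = max(cy - h // 12, 0), min(cy + h // 12, h)
            x0, x1 = max(cx - w // 12, 0), min(cx + w // 12, w)
            score += float(edges[y0:y1, x0:x1].sum())
        return score

    # A rail-mounted camera could sample several viewpoints and move toward
    # the one whose view maximizes this score (sketch only).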
As an engineering project, development of the instrument required collaboration of artists, controls engineers and computer vision researchers. Interviews with engineering researchers suggest that experimentation with the instrument may inspire new research directions in computer vision, machine learning, swarm robotics, remote collaboration and visualization.
We have developed a project, "Swarm Vision," through which to present the research in public venues, in both engineering and art museum contexts.
Swarm Vision is a multimedia installation consisting of automated pan/tilt/zoom cameras driven by custom software. Each camera is programmed with a computer vision algorithm to "seek" a specific visual feature in the environment. Each camera also maintains a memory of its visual environment, containing information on the presence of "interesting" features and allowing the robot to keep an up-to-date database of the scene. The three robots in the system communicate with one another: when one robot finds a particularly strong presence of the visual feature it is programmed to seek, it informs the other robots of the location of its gaze, and they all turn to look at the same place, collaboratively imaging the scene. Two visualizations are presented to the viewer: a graphic showing the real-time computer vision analysis being performed by each camera, and a 3D rendered reconstruction of the environment showing image frames in their approximate spatial locations.
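A simplified sketch of this coordination behavior is given below. It is an illustration of the logic described above, not the installation's actual control software; the class, method, and threshold names are assumptions:

    import random

    class SwarmCamera:
        """Illustrative model of one Swarm Vision camera: it seeks one
        visual feature, remembers where it has seen that feature, and can
        be redirected when a peer finds a strong match."""

        def __init__(self, name, feature_detector, alert_threshold=0.8):
            self.name = name
            self.detect = feature_detector     # frame -> (strength, pan, tilt)
            self.alert_threshold = alert_threshold
            self.scene_memory = []             # remembered (pan, tilt, strength)
            self.pan, self.tilt = 0.0, 0.0

        def step(self, frame, peers):
            strength, pan, tilt = self.detect(frame)
            self.scene_memory.append((pan, tilt, strength))
            self.pan, self.tilt = pan, tilt
            if strength >= self.alert_threshold:
                # Strong match: tell the other robots where this camera is
                # looking so the swarm images the same spot collaboratively.
                for peer in peers:
                    peer.look_at(pan, tilt)

        def look_at(self, pan, tilt):
            self.pan, self.tilt = pan, tilt

    # Hypothetical stand-in for a per-camera vision algorithm.
    def random_feature_detector(frame):
        return random.random(), random.uniform(-90, 90), random.uniform(-30, 30)

    cameras = [SwarmCamera(f"cam{i}", random_feature_detector) for i in range(3)]
    for _ in range(10):                        # a few simulated control cycles
        for cam in cameras:
            cam.step(frame=None, peers=[c for c in cameras if c is not cam])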
The Swarm Vision project, after two years of presentations and installations around the world, has evolved into new spinoff projects using robotic cameras and 3D displays for interactive art.
A new direction for the Swarm Vision project has been to utilize a new 3D stereoscopic display (separate from the UCSB Allosphere) for visualizations. The Samsung display is notable for its high screen resolution, stereoscopic 3D capability, and relatively low price.
Since we began the Swarm Vision project, visitors all over the world have asked about using the cameras to track people's faces. After all, humans take pictures of themselves and of other people more than any other subject. We finally decided to try to create a work using this idea.
Face Cloud uses the robotic cameras and virtual scene reconstruction of Swarm Vision, but differs in what drives the system. Rather than seeking geometric or color features, the cameras seek human faces. And rather than visualizing every moment, only images with faces in them are saved to the virtual cloud. The result is the creation of a virtual space defined by, and populated with, faces of all types looking back at the viewer (including the viewer's own face, eventually).
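As a minimal sketch of the selection rule described above (keep a frame only if it contains at least one face), assuming OpenCV's stock Haar-cascade face detector as a stand-in for the project's actual face detection pipeline, and a hypothetical capture loop:

    import cv2

    # Stock OpenCV frontal-face detector, used here only as a stand-in for
    # the project's actual face detection.
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def keep_for_cloud(frame_bgr):
        """Return True if the frame contains at least one detected face
        and should therefore be added to the virtual 'cloud' of images."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(
            gray, scaleFactor=1.1, minNeighbors=5, minSize=(60, 60))
        return len(faces) > 0

    # Hypothetical capture loop: only face-bearing frames populate the cloud.
    cap = cv2.VideoCapture(0)
    cloud = []                                 # stands in for the 3D scene
    for _ in range(100):
        ok, frame = cap.read()
        if ok and keep_for_cloud(frame):
            cloud.append(frame)
    cap.release()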
Publications
Bazo, D., Pinter, M., and Legrady, G. "Swarm Vision: Autonomous Aesthetic Multi-Camera Interaction." Proceedings of the 21st ACM International Conference on Multimedia, pp. 737-740, Nov. 2013.
Legrady, G., Pinter, M., and Bazo, D. "Swarm Vision." Leonardo 46, no. 4 (2013): 408-409. MIT Press.
ExpVisLab. "DRONE: The Automated Image." Le Mois de la Photo à Montréal, pp. 66-69. Kerber Verlag, 2013.
Conference Presentations
George Legrady. "Swarm Vis...