Award Abstract # 1925157
NRI: FND: Learning Visual Dynamics from Interaction

NSF Org: IIS
Division of Information & Intelligent Systems
Recipient: THE TRUSTEES OF COLUMBIA UNIVERSITY IN THE CITY OF NEW YORK
Initial Amendment Date: September 9, 2019
Latest Amendment Date: September 9, 2019
Award Number: 1925157
Award Instrument: Standard Grant
Program Manager: Cang Ye
cye@nsf.gov
 (703)292-4702
IIS
 Division of Information & Intelligent Systems
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: October 1, 2019
End Date: September 30, 2024 (Estimated)
Total Intended Award Amount: $750,000.00
Total Awarded Amount to Date: $750,000.00
Funds Obligated to Date: FY 2019 = $750,000.00
History of Investigator:
  • Carl Vondrick (Principal Investigator)
    cv2428@columbia.edu
  • Hod Lipson (Co-Principal Investigator)
Recipient Sponsored Research Office: Columbia University
615 W 131ST ST
NEW YORK
NY  US  10027-7922
(212)854-6851
Sponsor Congressional District: 13
Primary Place of Performance: Columbia University
530 West 120th Street
New York
NY  US  10027-7922
Primary Place of Performance Congressional District: 13
Unique Entity Identifier (UEI): F4N1QNPB95M4
Parent UEI:
NSF Program(s): NRI-National Robotics Initiative
Primary Program Source: 01001920DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 8086
Program Element Code(s): 801300
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

This project studies robots that use nearby, readily available physical objects to perform tasks, such as building a bridge out of miscellaneous rubble in a disaster area. Resourceful robots have the potential to enable many new applications in emergency response, healthcare, and manufacturing, improving public welfare, security, and efficiency. The research investigates how correlations among multiple senses, such as vision, sound, and touch, can teach a robot to solve interaction tasks without a human teacher, which is expected to improve the flexibility and versatility of autonomous robots. The project will provide research and educational opportunities for graduate and undergraduate students in computer science and mechanical engineering. Outcomes from this research will translate into new educational materials in computer vision, machine learning, and robotics.

This research investigates robots that interact with realistic environments in order to learn reusable representations for navigation and manipulation tasks. While there have been significant advances in leveraging machine learning for computer vision and robotics problems, a central challenge in both fields is generalizing to the realistic complexity and diversity of the physical world. Although simulation has proved instrumental in developing platforms for machine interaction, the unconstrained world is vast and computationally difficult to simulate. Instead, the investigators aim to capitalize on the inherent structure of physical environments, using the natural synchronization of modalities and surrounding context to efficiently learn self-supervised representations and policies for interaction with unconstrained environments. The investigators also plan several evaluations to analyze the generalization capabilities of these algorithms.
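
The synchronization idea in the preceding paragraph can be made concrete with a small sketch: a contrastive objective that pulls embeddings of temporally aligned modalities (e.g., the image and the sound recorded at the same moment) together while pushing mismatched pairs apart. The encoders, feature sizes, and random tensors below are placeholder assumptions chosen only to keep the example self-contained; this illustrates the general technique, not the project's implementation.

```python
# Illustrative sketch (not the project's code): contrastive alignment of two
# synchronized modalities, e.g. video frames and audio clips recorded together.
# Encoder architectures, feature sizes, and the random data are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallEncoder(nn.Module):
    """Maps one modality (flattened here for simplicity) to a unit-norm embedding."""
    def __init__(self, in_dim: int, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim)
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def cross_modal_contrastive_loss(z_vision, z_audio, temperature=0.07):
    """Synchronized (vision, audio) pairs are positives; all other pairings
    in the batch serve as negatives (InfoNCE-style objective)."""
    logits = z_vision @ z_audio.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(z_vision.size(0))        # matching indices on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

if __name__ == "__main__":
    vision_enc = SmallEncoder(in_dim=3 * 32 * 32)   # tiny image patches
    audio_enc = SmallEncoder(in_dim=1024)           # short audio features
    opt = torch.optim.Adam(list(vision_enc.parameters()) +
                           list(audio_enc.parameters()), lr=1e-3)

    # Random stand-ins for a batch of temporally aligned recordings.
    frames = torch.randn(16, 3 * 32 * 32)
    sounds = torch.randn(16, 1024)

    loss = cross_modal_contrastive_loss(vision_enc(frames), audio_enc(sounds))
    loss.backward()
    opt.step()
    print(f"contrastive loss: {loss.item():.3f}")
```

Because the pairing of modalities comes for free from recording them together, no human labels are required, which is the sense in which such a representation is self-supervised.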

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Arjun Mani, Ishaan Preetam Chandratreya, Elliot Creager, Carl Vondrick, and Richard Zemel. "SurfsUp: Learning Fluid Simulation for Novel Surfaces." International Conference on Computer Vision, 2023.
Boyuan Chen, Yuhang Hu, et al. "Smile Like You Mean It: Driving Animatronic Robotic Face with Learned Models." International Conference on Robotics and Automation, 2021.
Boyuan Chen, Yu Li, et al. "Beyond Categorical Label Representations for Image Classification." International Conference on Learning Representations, 2021.
Boyuan Chen, Robert Kwiatkowski, Carl Vondrick, and Hod Lipson. "Full-Body Visual Self-Modeling of Robot Morphologies." Science Robotics, v.7, 2022. https://doi.org/10.1126/scirobotics.abn1944
Boyuan Chen, Shuran Song, Hod Lipson, and Carl Vondrick. "Visual Hide and Seek." The 2020 Conference on Artificial Life, 2020. https://doi.org/10.1162/isal_a_00269
Dídac Surís, Ruoshi Liu, et al. "Learning the Predictability of the Future." Conference on Computer Vision and Pattern Recognition, 2021.
Mia Chiquier and Carl Vondrick. "Muscles in Action." International Conference on Computer Vision, 2023.
Ruilin Xu, Rundi Wu, et al. "Listening to Sounds of Silence for Speech Denoising." Advances in Neural Information Processing Systems, 2021.
Ruoshi Liu, Chengzhi Mao, Purva Tendulkar, Hao Wang, and Carl Vondrick. "SurfsUp: Learning Fluid Simulation for Novel Surfaces." International Conference on Computer Vision, 2023.

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

This project demonstrated how robots can learn about their environment through direct physical interaction rather than relying on pre-programmed simulations. The project developed new approaches that allow robots to gain an understanding of physical properties and dynamics through hands-on experimentation, similar to how humans learn by interacting with the world around them.

A major achievement of this research was the development of PaperBot, a system that learns to design and optimize tools made from paper through real-world trial and error. Unlike traditional approaches that depend heavily on computer simulation, PaperBot learns directly from physical experiments, using vision systems and force sensors to evaluate its designs. This represents an important advance in robotics, showing that robots can effectively learn complex physical properties through direct experience rather than theoretical models.

The research produced results in two challenging test cases. First, PaperBot learned to design and fold paper airplanes that flew farther than human-designed versions after just 100 trials, mastering complex aerodynamic principles through experimentation. Second, the system created paper-based grippers capable of carefully handling delicate objects such as fruit, demonstrating practical applications in fields like food processing and medical device handling.
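
As a rough illustration of the trial-and-error design loop described above, the sketch below performs a simple local search over a handful of hypothetical, normalized fold parameters, keeping whichever design scores best in a (stubbed) physical trial. The function names, parameter count, and synthetic scoring are assumptions made purely for illustration; they do not describe the actual PaperBot hardware or algorithm.

```python
# Illustrative sketch (not the actual PaperBot system): a simple trial-and-error
# loop that searches over hypothetical fold parameters and keeps the best design
# as measured by a physical experiment. `run_physical_trial` is a stand-in for a
# real vision- and force-sensor evaluation.
import random

def run_physical_trial(design):
    """Placeholder for a real-world experiment that returns a score,
    e.g. flight distance of a folded paper airplane in meters."""
    # Synthetic score so the example runs without hardware.
    return -sum((x - 0.6) ** 2 for x in design) + random.gauss(0, 0.01)

def propose(design, step=0.05):
    """Perturb each fold parameter slightly, keeping it in [0, 1]."""
    return [min(1.0, max(0.0, x + random.uniform(-step, step))) for x in design]

def optimize(num_params=4, budget=100):
    best = [random.random() for _ in range(num_params)]   # initial normalized fold angles
    best_score = run_physical_trial(best)
    for _ in range(budget - 1):                           # roughly 100 trials total
        candidate = propose(best)
        score = run_physical_trial(candidate)
        if score > best_score:                            # keep only improvements
            best, best_score = candidate, score
    return best, best_score

if __name__ == "__main__":
    design, score = optimize()
    print("best design:", ["%.2f" % x for x in design], "score: %.3f" % score)
```

In a real setup, run_physical_trial would be replaced by an actual experiment scored with cameras and force sensors, and the search strategy could range from this kind of random local search to Bayesian optimization.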

The broader impacts of this research extend well beyond robotics. The project advances sustainable manufacturing by showing how recyclable materials like paper can be transformed into functional tools. The development of low-cost, customizable paper-based tools could improve healthcare accessibility, particularly in resource-limited settings. The research also promotes more accessible technological development by using readily available materials and sharing findings openly. These advances lay the groundwork for more adaptable and resourceful robotic systems that can learn from their environment and create custom solutions for specific tasks. This capability could transform various fields, from manufacturing and healthcare to environmental sustainability, while promoting more accessible and sustainable technological development.

 


Last Modified: 02/03/2025
Modified by: Carl M Vondrick

