Award Abstract # 0746117
CAREER: Intuitive Appearance Design

NSF Org: CCF
Division of Computing and Communication Foundations
Recipient: TRUSTEES OF DARTMOUTH COLLEGE
Initial Amendment Date: January 8, 2008
Latest Amendment Date: December 12, 2013
Award Number: 0746117
Award Instrument: Continuing Grant
Program Manager: Nina Amla
namla@nsf.gov
 (703)292-7991
CCF
 Division of Computing and Communication Foundations
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: July 1, 2008
End Date: June 30, 2014 (Estimated)
Total Intended Award Amount: $399,999.00
Total Awarded Amount to Date: $302,693.00
Funds Obligated to Date: FY 2008 = $59,602.00
FY 2009 = $59,254.00
FY 2010 = $90,152.00
FY 2011 = $93,663.00
FY 2012 = $21.00
History of Investigator:
  • Fabio Pellacini (Principal Investigator)
    fabio@cs.dartmouth.edu
Recipient Sponsored Research Office: Dartmouth College
7 LEBANON ST
HANOVER
NH  US  03755-2170
(603)646-3007
Sponsor Congressional District: 02
Primary Place of Performance: Dartmouth College
7 LEBANON ST
HANOVER
NH  US  03755-2170
Primary Place of Performance Congressional District: 02
Unique Entity Identifier (UEI): EB8ASJBCFER9
Parent UEI: T4MWFG59C6R3
NSF Program(s): COMPUTING PROCESSES & ARTIFACT
Primary Program Source: 01000809DB NSF RESEARCH & RELATED ACTIVIT
01000910DB NSF RESEARCH & RELATED ACTIVIT
01001011DB NSF RESEARCH & RELATED ACTIVIT
01001112DB NSF RESEARCH & RELATED ACTIVIT
01001213DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 1045, 1187, 9150, 9218, HPCC
Program Element Code(s): 735200
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

CAREER: Intuitive Appearance Design

F. Pellacini

Abstract

Synthetic images have reached considerable sophistication, to the point where we can render images indistinguishable from reality. Today the major limiting factor for a ubiquitous use of computer-generated images is the human labor and expertise required to create the shapes, materials, and lights of synthetic environments. This project is a combined research and education effort that brings us closer to making the creation of synthetic imagery accessible to all. The research component of this project simplifies the design of objects' appearance, which comes from the interaction of materials and lights, to complement recent advances in shape modeling and animation. The goal is to allow users, including novices, to design the appearance of complex scenes in just minutes. On the education side, the project explores the interaction between the conceptual, technical, and aesthetic principles of image synthesis through curriculum development and out-of-classroom experiences.

More specifically, the project investigates interfaces that allow designers to intuitively and effectively specify design goals on objects' appearance, algorithms that robustly derive light and material parameters from such goals, and representations of appearance that are effective to manipulate. These investigations allow designers to manipulate complex lighting and materials with intuitive user-interface metaphors and to transfer appearance from example images. Qualitative and quantitative user studies guide our investigation and serve as rigorous validation of our results. We focus on novice users, but expect our work to benefit experts as well. The resulting methods allow intuitive and fast modeling, while remaining consistent across all aspects of appearance design, from simple lighting and materials to complex environmental illumination and textured surfaces, in realistic and non-photorealistic renderings of static and dynamic scenes.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


(Showing: 1 - 10 of 21)
Xiaobo An and Fabio Pellacini "User Controllable Color Transfer" Computer Graphics Forum (EG) , v.29(2) , 2009 , p.263
Wojciech Matusik, Boris Ajdin, Jinwei Gu, Jason Lawrence, Hendrik P. A. Lensch, Fabio Pellacini, and Szymon Rusinkiewicz "Printing Spatially-Varying Reflectance" ACM Transactions on Graphics (SIGGRAPH Asia) , v.28(5) , 2009 , p.128:1
Fabio Pellacini "EnvyLight: An Interface for Editing Natural Illumination" ACM Transactions on Graphics (SIGGRAPH) , v.29(4) , 2010 , p.34:1
Jiawei Ou, Fabio Pellacini "LightSlice: matrix slice sampling for the many-lights problem" ACM Transactions on Graphics (SIGGRAPH Asia 2011) , v.30 (6) , 2011 , p.179:1 10.1145/2024156.2024213
Jiawei Ou, Fabio Pellacini "SafeGI: Type Checking to Improve Correctness in Rendering System Implementation" Computer Graphics Forum , v.29(4) , 2010 , p.1269
Jonathan Denning, William B. Kerr, Fabio Pellacini "MeshFlow: Interactive Visualization of Mesh Construction Sequences" ACM Transactions on Graphics (SIGGRAPH 2011) , v.30 (4) , 2011 10.1145/1964921.1964961
J. Ou, O. Karlík, J. Křivánek, and F. Pellacini "Toward Evaluating Progressive Rendering Methods in Appearance Design Tasks." IEEE Computer Graphics and Applications , 2012
Juraj Obert, Fabio Pellacini, Sumanta Pattanaik "Visibility Editing For All-Frequency Shadow Design" Computer Graphics Forum , v.29(4) , 2010 , p.1441
O. Karlík, M. Růžička, V. Gassenbauer, F. Pellacini, and J. Křivánek "Toward Evaluating the Usefulness of Global Illumination for Novices in Lighting Design Tasks." IEEE Transactions on Visualization and Computer Graphics , 2012

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

The use of synthetic images is ubiquitous for many applications, from engineering to fine arts, but it is often limited by the human labor and expertise required to create synthetic environments. The goal of this project was to investigate methods that simplify the design of objects' appearance, which comes from the interaction of surface materials and scene lighting, through the development of interactive rendering algorithms and intuitive appearance design interfaces.

The first main finding of our work is that novices can perform appearance design tasks without prior training, if supported by the user interface. This dispels an old belief in the graphics and design communities that only trained artists can effectively perform design tasks.

The fundamental concept that we have introduced is that the most effective paradigm for editing complex natural materials and illumination is a select-and-modify formulation, where selection is the most crucial part and should be well supported by the interface. This is not trivial, since in 3D graphics and with natural appearance data, selection boils down to solving complex non-linear optimization problems that are computationally intensive and hard to model precisely. Nonetheless, this project has shown that this is not only possible, but that very effective solutions can be developed.

 We have validated this idea by introducing a new user study methodology geared toward measuring design tasks with statistical certainty. Our methodology works by performing both matching tasks, to measure the accuracy and speed that artists have when performing simple tasks, and open tasks, to measure the ability of artists to explore the design space.

Through the project we also found the need to provide interactive feedback to artists, without which some editing tasks cannot be performed efficiently. To do so, we have introduced new computational methods for rendering complex appearance.

The second main finding of our work is that the combination of interactive rendering algorithms and accurate selection for complex appearance lets artists work well with direct interfaces, rather than using indirect algorithms that match final appearance by optimization.

The work supported by this grant has been published in the main conferences and journals of our field and has seen some use in industry. We believe that the main applications to benefit from our work are the entertainment and design industries. The funding for this project partially supported three PhD students, all of whom have started successful careers in the movie industry and academia.


Last Modified: 03/02/2016
Modified by: Fabio Pellacini

