Tuesday, April 20, 2021 – 1:00pm to 2:00pm
Virtual Presentation – ET Remote Access – Zoom
ZHENGQIN LI, Ph.D. Student https://sites.google.com/a/eng.ucsd.edu/zhengqinli
Physically-motivated Deep Inverse Rendering with Complex Materials and Lighting
Image formation is a complex process in which lighting, geometry and materials interact to determine appearance. Inverse rendering seeks to recover those factors from one or more images, a fundamental challenge in computer vision. Conventional methods do not suffice because of the problem's highly ill-posed nature, and even recent deep networks typically cannot handle its complexity with limited data. Our goal is to solve the inverse rendering problem by developing physically-motivated deep networks that need only lightweight acquisition systems to handle phenomena such as complex materials, spatially-varying lighting, interreflections, hard and soft shadows, and transparency.
We achieve our goal through three major contributions: novel neural modules that incorporate domain knowledge of image formation; new representations that are parsimonious and leverage physical insights to enhance interpretability and generalization; and novel large-scale datasets rendered with a high degree of photorealism that generalize well to real scenes. The broader impact of our research is to turn inexpensive mobile phones into high-quality, accessible augmented reality devices for photorealistic object insertion, material replacement and light editing.
Zhengqin Li is a 5th-year PhD student at UC San Diego, advised by Prof. Manmohan Chandraker and supported by Qualcomm and Powell fellowships. His research interests center on solving challenging inverse rendering problems to recover complex material, lighting and geometry in unconstrained environments.
Zoom Participation. See announcement.