Juan Raúl Padrón Griffe

About me
I am a Marie Sklodowska-Curie Fellow of the EU Project PRIME and a PhD candidate at the Graphics and Imaging Lab. My PhD thesis, under the supervision of Prof. Adolfo Muñoz and Prof. Adrian Jarabo, focuses on physically-based rendering and appearance modeling of multi-scale materials, such as biological tissues (skin, scales and feathers) and intricate human-made objects (cosmetics). Previously, I earned my Bachelor of Science degree in Computer Science at the Universidad Central de Venezuela, where I specialized in computer graphics and image processing. My undergraduate thesis explored the generation and visualization of procedural terrains. Later, I received my Master of Science degree in Informatics at the Technical University of Munich, concentrating on computer graphics, computer vision and machine learning. During my Master's studies, I conducted research on 3D scanning and neural rendering for object and face relighting, advised by Dr. Justus Thies. Beyond my academic experience, I have two years of software development experience with backend technologies (.NET, ServiceStack, Java, Spring).

Looking for Opportunities
I recently submitted my Ph.D. dissertation, titled “Modeling and Rendering of Multiscale Materials”. I am currently seeking both postdoctoral and industry opportunities where I can apply my expertise in computer graphics, computer vision and artificial intelligence to the digital acquisition, representation and understanding of the visual world. My combined background in computer graphics, computer vision, machine learning, and software engineering allows me to tackle complex technical challenges from both a research and an implementation perspective. If you're interested in collaboration or have an opportunity that aligns with my expertise, please feel free to reach out!

Projects

Face Relighting In The Wild

2020, Jul 31    

Relighting plays an essential role in realistically transferring objects captured in one environment into another. In particular, current applications such as telepresence need to relight faces consistently with the illumination conditions of the target environment to offer an authentic immersive experience. Traditional physically-based methods for portrait relighting rely on an intrinsic image decomposition step, which requires solving a challenging inverse rendering problem to recover the underlying face geometry, material reflectance and lighting. Inaccurate estimation of these components usually leads to strong artifacts (e.g. artificial highlights or ghosting) in the subsequent relighting step. In recent years, several deep learning architectures have been proposed to address this limitation; however, none of them are free from these artifacts.

In my Master's thesis, we propose a general framework for automatic relighting enhancement using the StyleGAN generator as a photorealistic portrait prior. Specifically, we apply ratio image-based face relighting to an artificial portrait dataset generated with the StyleGAN model. Next, we refine this dataset by projecting the relit samples back into the StyleGAN latent space. Then, we train an autoencoder network to relight portraits given a source portrait image and a target spherical harmonic lighting. We evaluate the proposed method qualitatively and quantitatively on our synthetic dataset, the Laval face and lighting dataset, and the Multi-PIE dataset. Our experiments show that this method improves on the state-of-the-art single-portrait relighting algorithm on synthetic datasets. Unlike that algorithm, we achieve these results with a synthetic dataset five times smaller and a traditional training scheme.
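To give an idea of the ratio image-based relighting step, here is a minimal sketch (not the thesis code) assuming Lambertian shading approximated by second-order spherical harmonics and known per-pixel surface normals; function and variable names are illustrative:

```python
import numpy as np

def sh_basis(normals):
    """Evaluate the first 9 real spherical harmonics at unit normals (H, W, 3)."""
    x, y, z = normals[..., 0], normals[..., 1], normals[..., 2]
    return np.stack([
        0.282095 * np.ones_like(x),      # l=0
        0.488603 * y,                    # l=1
        0.488603 * z,
        0.488603 * x,
        1.092548 * x * y,                # l=2
        1.092548 * y * z,
        0.315392 * (3.0 * z**2 - 1.0),
        1.092548 * x * z,
        0.546274 * (x**2 - y**2),
    ], axis=-1)                          # (H, W, 9)

def ratio_relight(image, normals, sh_source, sh_target, eps=1e-6):
    """Relight a portrait by scaling each pixel with the target/source shading ratio."""
    basis = sh_basis(normals)            # (H, W, 9)
    shade_src = basis @ sh_source        # (H, W) shading under the source lighting
    shade_tgt = basis @ sh_target        # (H, W) shading under the target lighting
    ratio = shade_tgt / np.maximum(shade_src, eps)
    return image * ratio[..., None]      # broadcast the ratio over color channels
```

The key property of the ratio formulation is that the unknown albedo cancels out, so only normals and the two sets of SH lighting coefficients are needed to transfer the illumination.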

Results:

[Figures: Obama lighting conditions; Obama relighting results]

Advisor: Justus Thies
Supervisor: Matthias Niessner

GitHub repository · Document