Juan Raúl Padrón Griffe

About me
I am a Marie Sklodowska-Curie Fellow of the EU Project PRIME and a PhD candidate at the Graphics and Imaging Lab. My PhD thesis, supervised by Prof. Adolfo Muñoz and Prof. Adrian Jarabo, focuses on physically-based rendering and appearance modeling of multi-scale materials, such as biological tissues (skin, scales, and feathers) and intricate human-made objects (cosmetics). Previously, I earned my Bachelor of Science degree in Computer Science at the Universidad Central de Venezuela, where I specialized in computer graphics and image processing. My undergraduate thesis explored the generation and visualization of procedural terrains. Later, I received my Master of Science degree in Informatics at the Technical University of Munich, concentrating on computer graphics, computer vision, and machine learning. During my Master's studies, I conducted research on 3D scanning and neural rendering for object and face relighting, advised by Dr. Justus Thies. Beyond my academic experience, I have two years of software development experience in backend technologies (.NET, ServiceStack, Java, Spring).

Looking for Opportunities
I recently submitted my PhD dissertation, titled “Modeling and Rendering of Multiscale Materials”. I am currently seeking both postdoctoral and industry opportunities where I can apply my expertise in computer graphics, computer vision, and artificial intelligence to the digital acquisition, representation, and understanding of the visual world. My combined background in computer graphics, computer vision, machine learning, and software engineering allows me to tackle complex technical challenges from both a research and an implementation perspective. If you are interested in collaborating or have an opportunity that aligns with my expertise, please feel free to reach out!

Projects

Game Capture

2018, Sep 28    

The main goal of this Hiwi (student research assistant) project at the Chair of Remote Sensing Technology at the Technical University of Munich was to collect potentially useful information from video games in order to train computer vision models for autonomous driving applications. The project was inspired by the seminal publication Playing for Data, where the authors show that data acquired from video games, supplemented with real-world images, significantly increases the accuracy of deep learning models on the semantic segmentation task, while the acquisition pipeline reduces the amount of hand-labeled real-world data needed. We developed a prototype in C++ to collect the DirectX frame buffers (RGB, stencil, albedo, irradiance, specular, normal) from the video game Grand Theft Auto V in real time. We could also extract further internal game state (e.g., time of day, location, vehicle speed) using the Script Hook V mod.

In the second phase, we tried to capture data from other games, following the Free Supervision from Video Games paper. The GameHook library wraps DirectX 11 to intercept and modify a game's rendering calls, including injecting code into its vertex or pixel shaders. Unfortunately, the library failed to hook games like Project Cars 2 or Sebastian Loeb Rally.

Results:

- Disparity Map
- Normal Map
- Instance Segmentation
- Albedo Map
- Specular Map
- Irradiance Map

Advisors: Sandra Aigner, Lukas Liebel
Supervisor: Marco Körner

If you want to know more about the project, please contact Lukas Liebel and Sandra Aigner. They are really nice advisors and they are working on interesting projects. I would also suggest reading the amazing post GTA V - Graphics Study and playing with the amazing RenderDoc tool.