Hello! My name is Wonbong, but Won is also fine with me. I'm interested in understanding and reconstructing 3D scenes from images using methods such as NeRF, DUSt3R, Transformers, and Diffusion Models.
Recently, I have been particularly interested in combining diffusion models with 3D.
I will be attending CVPR 2025. Please feel free to get in touch (send me an email or DM me on LinkedIn) so we can arrange to meet during the event.
If you come to CVPR 2025, I warmly invite you to visit my poster session (#84) in Exhibition Hall D on Friday morning, June 13, 2025.
I am currently a final-year PhD student in CS at University College London, supervised by Prof Lourdes Agapito.
I also recently started working as a Postdoctoral Researcher on Monetization + Gen AI at Meta London.
Happy to grab a cup of coffee in London.
Pow3R: Empowering Unconstrained 3D Reconstruction with Camera and Scene Priors
Wonbong Jang, Philippe Weinzaepfel, Vincent Leroy, Lourdes Agapito, Jerome Revaud
CVPR, 2025
Project Webpage / arXiv / Poster PDF / Poster Session Info
DUSt3R generates 3D pointmaps from regular images without requiring camera poses. In practice, however, significant effort is often spent on camera calibration or on additional sensors that acquire point clouds, so such auxiliary information is frequently available. We present Pow3R, a single network capable of processing any subset of this auxiliary information. By incorporating these priors, our method achieves more accurate 3D reconstructions, multi-view depth estimation, and camera pose predictions. This approach also opens new possibilities, such as processing images at their native resolution and performing depth completion. Additionally, Pow3R generates pointmaps in two distinct coordinate systems, enabling the model to compute camera poses more quickly and accurately.
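As a rough illustration of the last point, the sketch below is not the Pow3R code; the function and variable names (pose_from_pointmaps, pts_cam, pts_ref, conf) are illustrative assumptions. It shows one standard way a camera pose can be recovered once the same pixels are predicted as two pointmaps, one in the camera's own frame and one in a shared reference frame: a weighted Umeyama (scaled Procrustes) alignment between the two point sets.

```python
# A minimal sketch (not the authors' code) of pose recovery from two pointmaps
# predicted in different coordinate frames. Names here are illustrative.

import numpy as np

def pose_from_pointmaps(pts_cam, pts_ref, conf=None):
    """Estimate (R, t, s) such that s * R @ pts_cam + t ~= pts_ref.

    pts_cam, pts_ref: (N, 3) corresponding 3D points from the two pointmaps.
    conf:             optional (N,) per-point confidence weights.
    """
    w = np.ones(len(pts_cam)) if conf is None else np.asarray(conf, float)
    w = w / w.sum()

    # Weighted centroids and centered point sets
    mu_cam = (w[:, None] * pts_cam).sum(0)
    mu_ref = (w[:, None] * pts_ref).sum(0)
    a = pts_cam - mu_cam
    b = pts_ref - mu_ref

    # Weighted cross-covariance and its SVD
    H = (w[:, None] * a).T @ b
    U, S, Vt = np.linalg.svd(H)

    # Closest rotation, handling possible reflections
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T

    # Scale and translation (Umeyama)
    var_cam = (w * (a ** 2).sum(1)).sum()
    s = (S * np.diag(D)).sum() / var_cam
    t = mu_ref - s * R @ mu_cam
    return R, t, s
```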
NViST turns in-the-wild single images into implicit 3D functions in a single forward pass using Transformers. It extends CodeNeRF to multiple real-world scenes with a feed-forward, Transformer-based model.
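To give a sense of what a single-pass, feed-forward image-to-3D model looks like in code, here is a minimal hypothetical PyTorch sketch. It is not the NViST implementation; all module names and sizes are assumptions. A Transformer encodes the image once, and the resulting latent conditions a NeRF-style MLP that can be queried at arbitrary 3D points.

```python
# A minimal sketch (not the released NViST code) of a feed-forward
# image-to-implicit-3D model. All sizes and names are illustrative.

import torch
import torch.nn as nn

class FeedForwardImageToNeRF(nn.Module):
    def __init__(self, patch=16, dim=256, depth=4, heads=8):
        super().__init__()
        # Patchify the image into tokens, ViT-style.
        self.to_tokens = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        # NeRF-style decoder: 3D point + pooled image latent -> (rgb, sigma).
        self.decoder = nn.Sequential(
            nn.Linear(3 + dim, dim), nn.ReLU(),
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, 4),
        )

    def forward(self, image, points):
        # image: (B, 3, H, W), points: (B, N, 3) query locations.
        tokens = self.to_tokens(image).flatten(2).transpose(1, 2)  # (B, T, dim)
        latent = self.encoder(tokens).mean(dim=1)                  # (B, dim)
        cond = latent[:, None, :].expand(-1, points.shape[1], -1)  # (B, N, dim)
        out = self.decoder(torch.cat([points, cond], dim=-1))      # (B, N, 4)
        rgb, sigma = out[..., :3].sigmoid(), out[..., 3:].relu()
        return rgb, sigma

# Single forward pass: one image in, a queryable radiance field out.
model = FeedForwardImageToNeRF()
rgb, sigma = model(torch.rand(1, 3, 224, 224), torch.rand(1, 1024, 3))
```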