In 2020, I had the opportunity to intern remotely at Adobe Research in San Jose, where I worked on primitive fitting for large 3D point clouds.
Then, in 2022, I interned at Snap Inc. in London, where my project involved 3D body mesh reconstruction from 2D images with improved 2D reprojection accuracy.
Currently, I am interning at Meta AI in London until December 2023.
We introduce StyleMorph, a 3D-aware generative model that disentangles 3D shape, camera pose, object appearance, and background appearance for high-quality image synthesis.
We chain 3D morphable modelling with deferred neural rendering by performing an implicit surface rendering of "Template Object Coordinates" (TOCS).
We introduce Softmesh, a fully differentiable pipeline to transform a 3D point cloud into a probabilistic mesh representation that allows us to directly render 2D images.
We present Cascaded Primitive Fitting Networks (CPFN), which rely on an adaptive patch sampling network to assemble the detection results of global and local primitive detection networks.
As a key enabler, we present a merging formulation that dynamically aggregates the primitives across global and local scales.
We train deeper and more accurate point processing networks by introducing three modular point processing blocks that reduce memory consumption and improve accuracy.
By combining these blocks, we design wider and deeper point-based architectures.
Work Experience
June to December 2023: Research Intern at Meta AI
June to December 2022: Research Intern at Snapchat AR
June to November 2020: Research Intern at Adobe Research