Try It On At Home
Solving clothes fitting
🔷 Subscribe to get breakdowns of the most important developments in AI in your inbox every morning.
Background
In 2018, a Japanese entrepreneur by the name of Yusaku Maezawa paid Elon Musk for the right to the first tourist flyby of the Moon. That same year, his company Zozo unveiled the Zozosuit, a 3D size-capture suit that promised to let you accurately size the clothes you bought from his Zozotown online store.
It was named one of Time Magazine’s Best Inventions of 2018, and since then neither the suit nor Time Magazine has been heard from again.
E-commerce returns are a $1 trillion problem, and the apparel, accessories, and footwear category alone accounts for a whopping $250 billion of that. Closing the gap between the online promise and the physical product would go a long way toward shrinking that number, but Zozo's attempt failed.
But never doubt Zuck, the Lord of the Millennials, who has solved this as a side quest on his way to AGI/Metaverse.
(Figure: base reality → read into a 3D model → play with it)
Who: Researchers from Meta Reality Labs, including Nikolaos Sarafianos, Tuur Stuyck, Xiaoyu Xiang, Yilei Li, Jovan Popovic, and Rakesh Ranjan.
Why:
Current methods for creating virtual clothing are time-consuming and require specialized software and expertise
The team aimed to develop a method that allows users to quickly generate 3D textured clothes from a single image
Enabling rapid asset generation could unlock virtual applications at scale and assist in the design process
How:
They developed Garment3DGen, which transforms a base garment mesh into a simulation-ready asset directly from images or text prompts
The method leverages recent progress in image-to-3D diffusion models to generate 3D garment geometries
They propose using the generated geometries as pseudo ground-truth and set up a mesh deformation optimization procedure (a toy sketch of this step follows the list)
Carefully designed losses allow the base mesh to freely deform toward the desired target while preserving mesh quality and topology
A texture estimation module generates high-fidelity texture maps that faithfully capture the input guidance
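To make that deformation step concrete, here is a minimal sketch in pure PyTorch. It is an illustration under stated assumptions, not Meta's code: the chamfer and uniform-Laplacian losses stand in for the paper's "carefully designed losses," and all names (`chamfer`, `laplacian_loss`, `deform`) are hypothetical. The property it shares with Garment3DGen is that only vertex positions are optimized, so the base mesh's connectivity (topology) is preserved.

```python
# Toy sketch (assumption, not the paper's implementation): deform a base
# garment mesh toward a pseudo ground-truth target point cloud, e.g. points
# sampled from an image-to-3D model's output.
import torch

def chamfer(a, b):
    """Symmetric chamfer distance between point sets a (N,3) and b (M,3)."""
    d = torch.cdist(a, b)  # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def laplacian_loss(verts, neighbors):
    """Uniform Laplacian: each vertex stays near the mean of its neighbors."""
    nbr_mean = torch.stack([verts[n].mean(dim=0) for n in neighbors])
    return (verts - nbr_mean).norm(dim=1).mean()

def deform(base_verts, neighbors, target_pts, steps=500, lam=0.1):
    """Optimize per-vertex offsets; faces/topology are never touched."""
    offsets = torch.zeros_like(base_verts, requires_grad=True)
    opt = torch.optim.Adam([offsets], lr=1e-3)
    for _ in range(steps):
        verts = base_verts + offsets
        loss = chamfer(verts, target_pts) + lam * laplacian_loss(verts, neighbors)
        opt.zero_grad(); loss.backward(); opt.step()
    return (base_verts + offsets).detach()
```

The Laplacian term is what keeps triangles from degenerating as the vertices chase the target, which is why the result stays a clean, simulation-ready mesh rather than a crumpled point soup.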
What did they find:
Garment3DGen can generate high-quality simulation-ready stylized garment assets complete with associated textures
The method outperforms prior work across all metrics while producing physically plausible and high-quality garment assets
The output geometries can be used for physics-based cloth simulation (a toy example of such a step follows this list), hand-garment interaction in a VR environment, and going from a simple sketch to a drivable 3D garment
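To illustrate what "simulation-ready" buys you, here is a toy position-based-dynamics step, again an assumption rather than the paper's simulator: because the output mesh has clean, fixed connectivity, its edges can be used directly as distance constraints.

```python
# Toy cloth step (assumption: not Meta's simulator). Edges act as springs;
# vertices are advanced with Verlet integration plus constraint projection.
import torch

def verlet_step(pos, prev_pos, edges, rest_len, dt=1/60, gravity=(0.0, -9.8, 0.0)):
    g = torch.tensor(gravity)
    # Verlet: velocity is implicit in (pos - prev_pos); add gravity.
    new = pos + (pos - prev_pos) + g * dt * dt
    # A few Jacobi iterations pushing each edge back toward its rest length.
    for _ in range(8):
        i, j = edges[:, 0], edges[:, 1]
        d = new[j] - new[i]
        length = d.norm(dim=1, keepdim=True).clamp(min=1e-8)
        corr = 0.5 * (length - rest_len[:, None]) * d / length
        new = new.index_add(0, i, corr).index_add(0, j, -corr)
    return new, pos  # new positions, and old positions as the next prev_pos
```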
What are the limitations and what's next:
The method requires a template mesh, which limits the types of garments that can be generated while maintaining good mesh quality
Estimated textures sometimes do not fully preserve fine-level details
Future work could focus on providing a more diverse template library and tuning the texture enhancement module to better preserve details
Why it matters:
Garment3DGen enables rapid generation of 3D garments from a single image or text prompt without the need for artist intervention
The method could significantly reduce the time and expertise required to create virtual clothing assets
Enabling low-friction asset creation could be a key enabler for unlocking virtual applications at scale and facilitating faster exploration and creation of new designs
Additional notes:
The paper is slated to appear in the CVPR 2024 proceedings
Meta presents Garment3DGen!
A method that can stylize the geometry and textures from 2D image and 3D mesh garments, which can be fitted on top of parametric bodies and be simulated 🔥
More examples ⬇️
— Dreaming Tulpa 🥓👑 (@dreamingtulpa)
8:13 AM • Apr 3, 2024
Become a subscriber for daily breakdowns of what’s happening in the AI world: