
TightCap: 3D Human Shape Capture with Clothing Tightness Field

Xin Chen, Anqi Pang, Wei Yang, Peihao Wang, Lan Xu, Jingyi Yu.
In SIGGRAPH 2022 (Journal Track).

This repository contains the official implementation of the paper TightCap: 3D Human Shape Capture with Clothing Tightness Field (SIGGRAPH 2022). TightCap accurately captures both the human shape and the dressed garments from only a single 3D human scan.

Abstract

In this paper, we present TightCap, a data-driven scheme to capture both the human shape and dressed garments accurately from only a single 3D human scan, which enables numerous applications such as virtual try-on, biometrics, and body evaluation. We propose a new clothing tightness dataset (CTD) of human scans with a large variety of clothing styles, poses, and corresponding ground-truth human shapes to stimulate further research. Extensive experiments demonstrate the effectiveness of TightCap in achieving high-quality reconstruction of both the human shape and dressed garments, as well as further applications in clothing segmentation, retargeting, and animation.

Citation

If you find our code or paper useful, please consider citing:

@article{chen2021tightcap,
  title={TightCap: 3D Human Shape Capture with Clothing Tightness Field},
  author={Chen, Xin and Pang, Anqi and Yang, Wei and Wang, Peihao and Xu, Lan and Yu, Jingyi},
  journal={ACM Transactions on Graphics (Presented at ACM SIGGRAPH)},
  year={2021}
}

You can also leave a star ⭐ if you find the code useful.

The CTD Dataset

TightCap introduces a new clothing tightness dataset (CTD) of human scans with a large variety of clothing styles, poses, and corresponding ground-truth human shapes to stimulate further research.

Download

You can download the CTD dataset (version 1.0), split into a static and a dynamic part, from the OneDrive links below. Please unzip each part together with all of its downloaded volumes; a minimal loading sketch follows the list.

  • CTD_Static_part (20 GB, 228 different garments): the static meshes, i.e., various clothed humans with segmented top/bottom clothing/shoes and carved bodies.
  • CTD_Dynamic_part (15 GB, 652 annotated meshes from 13 dynamic 3D human sequences): the dynamic meshes, i.e., per-frame clothed humans with segmented top/bottom clothing/shoes and carved bodies.
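
As a starting point, here is a minimal browsing sketch in Python using trimesh. The folder name and file-name patterns are assumptions inferred from the part names above, not the actual archive layout; adjust them after unzipping.

```python
# Minimal sketch for browsing one static CTD scan with trimesh.
# The folder name and file-name patterns below are ASSUMPTIONS based on
# the part names described above, not the actual archive layout.
from pathlib import Path

import trimesh

scan_dir = Path("CTD_Static_part/f_t_p_001")  # hypothetical mesh-code folder

parts = {}
for part in ("naked", "top", "bottom", "shoes", "textured"):
    candidates = sorted(scan_dir.glob(f"*{part}*.obj"))  # assumed naming
    if candidates:
        # force="mesh" flattens multi-geometry OBJ files into a single mesh
        parts[part] = trimesh.load(candidates[0], force="mesh")

for name, mesh in parts.items():
    print(f"{name}: {len(mesh.vertices)} vertices, {len(mesh.faces)} faces")
```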

Annotation

The CTD dataset contains rich annotations for a range of 2D/3D vision and graphics tasks, such as human generation, body estimation, pose retargeting, clothing segmentation, virtual try-on, and neural rendering. The high-quality textures also support re-rendering multi-view virtual input images for algorithm verification and quantitative experiments.

For the static part, each folder (named with the mesh code, see below) includes the naked body, top, bottom, shoes, and the original textured mesh. We visualize all meshes for preview and provide the mesh part annotations in meshcode_labelsV.txt.

f: female, m: male
c: coat, d: dress, t: t-shirt, p: pants
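
For convenience, a small helper can expand these single-letter codes into readable labels. Only the letter meanings above come from the dataset description; the exact code format (separator and ordering) is an assumption for illustration:

```python
# Hedged helper that expands the single-letter mesh codes documented above.
# Only the letter meanings are from the dataset description; the exact
# code format (separator, ordering) is an ASSUMPTION for illustration.
GENDER = {"f": "female", "m": "male"}
GARMENT = {"c": "coat", "d": "dress", "t": "t-shirt", "p": "pants"}

def decode_mesh_code(code: str) -> dict:
    """Decode a mesh code such as 'f_t_p' (hypothetical format)."""
    letters = [ch for ch in code.lower() if ch.isalpha()]
    gender = GENDER.get(letters[0], "unknown")
    garments = [GARMENT[ch] for ch in letters[1:] if ch in GARMENT]
    return {"gender": gender, "garments": garments}

print(decode_mesh_code("f_t_p"))  # {'gender': 'female', 'garments': ['t-shirt', 'pants']}
```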

For the dynamic part, each sequence likewise includes the naked body, top/bottom, and original textured mesh. The annotated frames are selected from the original dynamic clips captured at 30 FPS, and we also visualize these meshes. Moreover, we will release the full dynamic sequences soon; until then, you can refer to Multi-view Neural Human Rendering (NHR) for more dynamic mesh sequences and the code for neural human rendering.
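
To process a sequence frame by frame, a sketch like the following may help; the sequence folder and per-frame file names are again hypothetical:

```python
# Sketch for iterating one dynamic CTD sequence in frame order.
# The sequence folder and per-frame naming pattern are ASSUMPTIONS.
from pathlib import Path

import trimesh

seq_dir = Path("CTD_Dynamic_part/sequence_01")  # hypothetical folder

for frame_path in sorted(seq_dir.glob("frame_*_textured.obj")):  # assumed naming
    mesh = trimesh.load(frame_path, force="mesh")
    # ... per-frame processing of the clothed human goes here
    print(frame_path.name, len(mesh.vertices))
```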

Tools - DomeViewer

DomeViewer is a Unity3D-based viewer for the CTD dataset that plays 4D free-viewpoint video with dynamic mesh sequences. After loading a model, you can freely pause, play, and fast-forward, and, unlike a traditional video, rotate the viewpoint with the right mouse button to experience the 4D clip.

Tools - Dataset

The tools folder includes the index lists with per-mesh details:

index_dynamic.xlsx, index_static.xlsx
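
These spreadsheets can be inspected directly with pandas (the openpyxl engine is required for .xlsx files); the column layout depends on the files themselves and is not assumed here:

```python
# Minimal sketch for inspecting the index spreadsheets with pandas.
# Requires the openpyxl engine for .xlsx files (pip install openpyxl).
import pandas as pd

static_index = pd.read_excel("tools/index_static.xlsx")
dynamic_index = pd.read_excel("tools/index_dynamic.xlsx")

print(static_index.columns.tolist())  # column layout depends on the file
print(static_index.head())
print(len(dynamic_index), "dynamic entries")
```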

Contact

Xin Chen, Email: [email protected]

To encourage related research, we provide the download links for the dataset. Please make sure the dataset is used for non-commercial purposes only. Considering privacy issues, do not give it to any third party or publish it publicly anywhere. By downloading our dataset, you agree to our terms of access in the declaration.

Licenses

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

All material is made available under Creative Commons BY-NC-SA 4.0 license. You can use, redistribute, and adapt the material for non-commercial purposes, as long as you give appropriate credit by citing our paper and indicating any changes that you've made.

Relevant Works

Multi-view Neural Human Rendering (CVPR 2020)
Minye Wu, Yuehao Wang, Qiang Hu, Jingyi Yu

SportsCap: Monocular 3D Human Motion Capture and Fine-grained Understanding in Challenging Sports Videos (IJCV 2021)
Xin Chen, Anqi Pang, Wei Yang, Yuexin Ma, Lan Xu, Jingyi Yu

ChallenCap: Monocular 3D Capture of Challenging Human Performances using Multi-Modal References (CVPR Oral 2021)
Yannan He, Anqi Pang, Xin Chen, Han Liang, Minye Wu, Yuexin Ma, Lan Xu

AutoSweep: Recovering 3D Editable Objects from a Single Photograph (TVCG 2018)
Xin Chen, Yuwei Li, Xi Luo, Tianjia Shao, Jingyi Yu, Kun Zhou, Youyi Zheng

SMPL: A Skinned Multi-Person Linear Model (TOG 2015)
Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, Michael J. Black

Detailed, accurate, human shape estimation from clothed 3D scan sequences (CVPR 2017)
Chao Zhang, Sergi Pujades, Michael J. Black, Gerard Pons-Moll
