Don't forget to follow the installation steps first (see Installation).
☝ NOTE: All the config files specify local paths relative to the root, so we recommend running all the commands from the root of the code directory to avoid path problems.
nn/evaluation_scripts/
contains several tools that perform evaluations of the trained models. Run the script of your interest with the --help option to get information about its parameters.
maya_att_weights.py
is the exception: it is a helper script for visualizing attention weights predicted by a model (they are saved alongside the sewing pattern predictions) within the Autodesk Maya environment.
Every evaluation script takes in a config file that describes the experiment to evaluate. The scripts can work with either locally saved models or W&B runs.
- Locally saved models. We provide pre-trained NeuralTailor models (pattern shape prediction and stitch prediction) in the ./models/ folder. The corresponding configuration files (e.g., ./models/att/att.yaml) contain full information about hyperparameters, the dataset, and paths to the pre-trained model weights. You can similarly create configuration files to work with locally saved models produced by your own experiments.
- Weights&Biases runs (easier for your trained models). When training a framework, all the experiment information is logged to the W&B cloud. Evaluation scripts can work with those runs directly, without the need to manually download models and fill in configurations.
To run scripts with W&B runs, simply provide the project name, run name, and run id in the experiment section of the configuration file, and specify unseen_data_folders in the dataset section if evaluating on unseen garment types. The rest of the config is irrelevant since it will be overridden by the information from the cloud run. Here is an example of such an evaluation config.
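As a reference, here is a hypothetical sketch of such a config. The experiment field names follow the resume example shown later in this README; the project/run values and the folder name are placeholders, and the exact layout of the dataset section may differ in the real config files:

```yaml
# Hypothetical W&B-based evaluation config; exact field layout may differ.
experiment:
  project_name: Test-Garments-Reconstruction
  run_name: NeuralTailor-Debug
  run_id: uzcw54zu             # hash from the W&B run URL
dataset:
  unseen_data_folders:         # only needed when evaluating unseen garment types
    - my_unseen_garment_type   # placeholder folder name
```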
/nn/evaluation_scripts/on_test_set.py
allows updating some parameters of the dataset for evaluation purposes, e.g., adding point cloud noise or evaluating on the scan imitation version of the input garments.
To do so, specify new values in the load_dataset(..)
function calls in the script. The script itself contains some examples.
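The exact keyword arguments of load_dataset(..) depend on the codebase, so as a standalone illustration of what "point cloud noise" means here, below is a minimal pure-Python sketch; the function name add_point_noise and the std parameter are hypothetical stand-ins for the dataset parameter one would set in the script:

```python
import random

def add_point_noise(points, std=0.01, seed=0):
    """Return a copy of a 3D point cloud with zero-mean Gaussian noise
    added to every coordinate. `std` plays the role of the noise-magnitude
    parameter; the real parameter name in the codebase may differ."""
    rng = random.Random(seed)  # fixed seed for reproducible evaluation
    return [[c + rng.gauss(0.0, std) for c in p] for p in points]

cloud = [[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]]  # toy garment point cloud
noisy = add_point_noise(cloud, std=0.05)
print(len(noisy), len(noisy[0]))  # same shape as the input cloud
```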
Evaluate NeuralTailor pattern shape prediction model on seen garment types only:
python nn/evaluation_scripts/on_test_set.py -sh models/att/att.yaml
NOTE: when evaluating only the pattern shape model without stitches, the stitches are transferred from the corresponding GT sewing patterns (if available) for convenience of loading and draping.
Evaluate full NeuralTailor framework on unseen garment types and save sewing pattern predictions:
python nn/evaluation_scripts/on_test_set.py -sh models/att/att.yaml -st models/att/stitch_model.yaml --unseen --predict
Evaluate stitch model on previously saved sewing pattern predictions:
python nn/evaluation_scripts/on_test_set.py -st models/att/stitch_model.yaml --pred_path /path/to/sewing/pattern/data
Evaluate the baseline model, LSTM + stitch tags, on seen data types (it will produce both pattern shape and stitch quality stats):
python nn/evaluation_scripts/on_test_set.py -st models/baseline/lstm_stitch_tags.yaml
The training of NeuralTailor is two-step -- the Pattern Shape and Stitch Information models are trained separately. You can use the config files saved in models/ as training configs.
- Pattern Shape Regression Network training
To run with our final NeuralTailor architecture setup, simply run this command from the directory root:
python nn/train.py -c ./models/att/att.yaml
Training on the full dataset will take 2-4 days depending on your hardware.
- Stitch training
- Runs after the Shape Regression Network
- Update the name & id of your shape training run in the 'old_experiment' section of the Stitch model config file. Setting this option enables training on the Pattern Shape predictions. Setting 'old_experiment' -> 'predictions' to False, or removing the 'old_experiment' section altogether, will result in training on GT sewing patterns.
- Run:
python nn/train.py -c ./models/att/stitch_model.yaml
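For orientation, a hypothetical sketch of the 'old_experiment' section in stitch_model.yaml; the key names mirror the 'experiment' section used elsewhere in this README and may differ from the real file, and the run values are placeholders:

```yaml
# Hypothetical 'old_experiment' section; exact key names may differ.
old_experiment:
  run_name: NeuralTailor-Shape   # name of your shape training run
  run_id: uzcw54zu               # id from the W&B run URL
  predictions: true              # train on shape-model predictions; false -> GT patterns
```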
The training script supports resuming training runs (if stopped for any reason). Resuming is attempted automatically when running the training script if run_id is specified in the input config. So, to resume a run, simply specify the full info about the wandb run in the experiment section of the training config, for example:
experiment:
project_name: Test-Garments-Reconstruction
run_name: NeuralTailor-Debug
run_id: uzcw54zu
run_id is the hash found in the W&B URL of the run page, e.g., the link for the above run looks like https://wandb.ai/wandb_username/Test-Garments-Reconstruction/runs/uzcw54zu*
*Link is invalid and only provided as an example
By modifying the configuration files for the corresponding models, one can reproduce the setups used in our reported experiments. Some examples:
- Setting the dataset->filter_by_params option to an empty string or null will force the training process to use the full dataset without filtering out the design overlaps.
- Changing the model class name (NN->model) to GarmentFullPattern3D will result in training the LSTM-based model with a global latent space and an LSTM-based pattern decoder (our baseline).
- Adding stitch, free_class to loss_components and quality_components will enable training a model that predicts stitches using stitch tags as part of the sewing pattern shape model.
- Changing dataset->panel_classification to ./nn/data_configs/panel_classes_plus_one.json will give you a run with an alternative panel classes arrangement.
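Put together, such a modified training config might contain an excerpt like the following sketch; the option names come from the list above, but the surrounding structure of the real config files may differ:

```yaml
# Hypothetical excerpt of a modified training config.
dataset:
  filter_by_params: null        # use the full dataset, keep design overlaps
  panel_classification: ./nn/data_configs/panel_classes_plus_one.json
NN:
  model: GarmentFullPattern3D   # LSTM-based baseline decoder
```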
NOTE: if the config changes are expected to affect the list of the datapoints used for training (changing filter_by_params or max_datapoints_per_type), the provided data splits into train/valid/test might become invalid. Remove data_split->filename to allow the training process to create a new split on the go. We only provide splits for the dataset with and without parameter filtering (in nn/data_configs).
By default, the training run information is synchronized with the [Weights&Biases](https://wandb.ai) cloud. To disable this synchronization (run offline), set the environment variable:
WANDB_MODE="offline"
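For example, in a POSIX shell the variable can be set for the current session before launching training (the echo line is just a sanity check; the training command is the one given earlier in this README):

```shell
# Run training offline: W&B stores logs locally instead of syncing to the cloud.
export WANDB_MODE="offline"
echo "WANDB_MODE=$WANDB_MODE"
# python nn/train.py -c ./models/att/att.yaml   # then launch training as usual
```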
Source: W&B Documentation
NOTE: All secondary scripts (evaluation, stitch training) will require setting up configs for using locally saved models (as described above) to evaluate on these offline runs.