Hi,
Thanks for the great work. I'm interested in knowing more details about how the model was trained, as there seems to be some inconsistency between the released checkpoint and the details described in the paper. In the paper you mention that the model is trained on DIV2K, Flickr2K, OST, and the first 10,000 face images from FFHQ for 500K steps. However, the Hugging Face dataset page you released (https://huggingface.co/datasets/yangtao9009/PASD_dataset) also contains DIV8K and Unsplash2K. Are these two datasets also used in training? In addition, the released model is "checkpoint-100000". Does that mean the model was trained for 100K steps instead of 500K steps?
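For context on why I read the checkpoint name that way: I'm assuming the training script follows the usual diffusers/accelerate checkpointing pattern, where the saved folder name encodes the global optimizer step (the `output_dir` and `checkpointing_steps` names below are just illustrative placeholders, not necessarily what your script uses):

```python
# Sketch of the common diffusers/accelerate checkpointing pattern:
# the folder name embeds the global training step, so "checkpoint-100000"
# would correspond to 100K optimizer steps.
import os

def maybe_save_checkpoint(accelerator, output_dir, checkpointing_steps, global_step):
    # Save every `checkpointing_steps` optimizer steps, e.g. every 10_000 steps.
    if global_step % checkpointing_steps == 0:
        save_path = os.path.join(output_dir, f"checkpoint-{global_step}")
        accelerator.save_state(save_path)  # -> .../checkpoint-100000 at step 100K
```

If your script names checkpoints differently, please correct me.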
I'm also wondering how long it would take to train the model on 8 V100 GPUs, as mentioned in the paper. Thanks in advance.
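For reference, my own back-of-envelope estimate looks like the sketch below; the per-step time is purely a placeholder I have not measured, so a real number from your runs would be very helpful:

```python
# Rough wall-clock estimate for 500K steps; `sec_per_step` is a placeholder,
# not a measured value for 8x V100.
steps = 500_000
sec_per_step = 1.5  # placeholder: replace with the measured per-step latency
hours = steps * sec_per_step / 3600
print(f"~{hours:.0f} hours of wall-clock time ({hours / 24:.1f} days)")
```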