Making the frame size bigger #4
I cherry-picked an awesome idea from https://github.com/dajes/AnimateDiff. It's in the devel branch, and I'm still working on it. Changing the pe size requires retraining the model, which is too expensive for me.
Yes, this combination would be a perfect approach! I would be happy to do new trainings and provide the GPU power for them. We could also start with smaller models initially. Would you be able to make a model that does 52 motion frames? It would be very dope to have longer videos! @tumurzakov
@Don-Chad I increased it to 48 (24*2) by doubling the pe tensors from the original module and trained for 1000 steps. It works well, better than training from scratch. The main problem is not GPU power but the dataset.
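For anyone wondering what "doubling the pe tensors" could look like in practice, here is a minimal sketch. It is not the repo's actual code; it assumes the motion-module checkpoint stores positional-encoding buffers under keys ending in `pos_encoder.pe` with shape `(1, max_len, dim)`, as in AnimateDiff's `PositionalEncoding`, and the file names are illustrative:

```python
import torch

def double_pe(state_dict, multiplier=2):
    """Tile every positional-encoding buffer along the sequence
    dimension so positions beyond the original 24 are defined."""
    for key, tensor in state_dict.items():
        if key.endswith("pos_encoder.pe"):
            # e.g. (1, 24, dim) -> (1, 48, dim)
            state_dict[key] = tensor.repeat(1, multiplier, 1)
    return state_dict

# usage sketch (checkpoint paths are hypothetical)
sd = torch.load("mm_sd_v15.ckpt", map_location="cpu")
torch.save(double_pe(sd, multiplier=2), "mm_sd_v15_pe48.ckpt")
```

Tiling simply repeats the 24-entry table, so frames 24–47 reuse the same encodings as frames 0–23; the 1000 fine-tuning steps mentioned above would then adapt the module to the longer window.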
Wow! Would you please share the pipeline_animation with the doubled pe? (Sorry, I cannot find how to do this.) I would love to work on the dataset. I have a lot of good, varied content with labels. Happy to share a new motion module.
I trained 96 frames on an A100 for 1000 steps (20 minutes). It took 21 GB of VRAM; it seems up to 184 frames could be trained on an A100. Inference on an A100 took 20 GB of VRAM. But at that frame count there could be problems with the pe. In AnimateDiff the pe comes from an NLP transformer; possibly we could try ViT positional encodings there to encode longer videos.
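The ViT idea presumably refers to how ViT resizes its positional table by interpolation when fine-tuning at a new resolution, rather than tiling it. A hedged sketch of that alternative, under the same `(1, old_len, dim)` buffer assumption as above:

```python
import torch
import torch.nn.functional as F

def interpolate_pe(pe, new_len):
    """ViT-style resize of a positional-encoding table: linearly
    interpolate along the sequence axis instead of repeating it.

    pe: (1, old_len, dim) -> returns (1, new_len, dim)
    """
    pe = pe.permute(0, 2, 1)  # (1, dim, old_len): interpolate wants channels first
    pe = F.interpolate(pe, size=new_len, mode="linear", align_corners=False)
    return pe.permute(0, 2, 1)
```

Interpolation keeps the encoding smooth across the whole clip instead of repeating the same 24 positions, which may matter at 96+ frames.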
Thanks kindly for sharing! Just one line makes a difference :-) Good to see it works. Let me give it a try.
@tumurzakov What difference do you think ViT can make in this regard for PE? |
I can't seem to use the motion_module_pe_multiplier feature.
Here is my config for 264 frames:

Take a look at
It works!
Thanks for sharing this.
Any idea how we could change the video length to something like 32 or 48? Longer motion would be great; at the moment it seems to be capped at 24.
It would be fine to start over, instead of using the existing motion dataset.
The error I am getting now is:
File "g:\content\animatediff\animatediff\models\motion_module.py", line 244, in forward
x = x + self.pe[:, :x.size(1)]
RuntimeError: The size of tensor a (32) must match the size of tensor b (24) at non-singleton dimension 1
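The traceback says the checkpoint's pe buffer still holds only 24 positions while inference is asking for 32 frames. One hedged workaround, reusing the same key-layout assumption as the sketches above (keys ending in `pos_encoder.pe`, shape `(1, 24, dim)`), is to grow the buffers before loading the motion module:

```python
import torch

TARGET = 32  # desired frame count

# Paths and key layout are assumptions, not the repo's guaranteed format.
sd = torch.load("mm_sd_v15.ckpt", map_location="cpu")
for key, t in sd.items():
    if key.endswith("pos_encoder.pe") and t.shape[1] < TARGET:
        reps = -(-TARGET // t.shape[1])            # ceil division
        sd[key] = t.repeat(1, reps, 1)[:, :TARGET]  # tile, then trim
torch.save(sd, "mm_sd_v15_pe32.ckpt")
```

After expanding (and ideally fine-tuning, as described earlier in the thread), `self.pe[:, :x.size(1)]` can index up to 32 positions and the size mismatch goes away.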