Add lora-embedding bundle system #13568
Conversation
Yes, indeed. Pivotal tuning is a proven, effective training method. Check out my Civitai profile (narugo1992), where over 500 models were trained using LoRA + pivotal tuning. User feedback indicates excellent quality, currently ranking second on the site. Clearly, pivotal tuning has demonstrated its effectiveness and potential. Additionally, we conducted a quantitative analysis of existing pivotal tuning, referenced in this article: Article on pivotal tuning analysis.
I have written a post on Civitai explaining the benefits of pivotal tuning. Not to mention that this seems to be the go-to method for many papers: [1] Kumari, N., Zhang, B., Zhang, R., Shechtman, E., & Zhu, J. Y. (2023). Multi-concept customization of text-to-image diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1931-1941).
Need to add support for the string_to_param dict
format: bundle_emb.EMBNAME.string_to_param.KEYNAME
Choose standalone embedding (in /embeddings folder) first
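A minimal sketch of how a loader could group keys of this form out of a LoRA state_dict, assuming the state_dict is an ordinary {key: tensor} mapping; the function name extract_bundle_embeddings is illustrative and not the PR's actual code:

```python
def extract_bundle_embeddings(lora_state_dict):
    """Group "bundle_emb.EMBNAME.string_to_param.KEYNAME" entries by embedding name.

    Returns {emb_name: {"string_to_param": {key_name: tensor}, ...}}.
    """
    embeddings = {}
    for full_key, tensor in lora_state_dict.items():
        if not full_key.startswith("bundle_emb."):
            continue  # ordinary LoRA weight, leave it alone
        # "bundle_emb.EMBNAME.<rest>" -> emb_name, rest
        _, emb_name, rest = full_key.split(".", 2)
        emb = embeddings.setdefault(emb_name, {})
        if rest.startswith("string_to_param."):
            key_name = rest.split(".", 1)[1]
            emb.setdefault("string_to_param", {})[key_name] = tensor
        else:
            emb[rest] = tensor
    return embeddings
```

As noted above, a standalone embedding of the same name in the /embeddings folder would still take precedence over a bundled one.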
Any files to test this on? The huge copied block of code in …
@AUTOMATIC1111 Here is a demo of the lora-emb bundle for this model: rosmontis_arknights_bundle.zip
Description
Some trainers (like hcp-diffusion and naifu) have implemented a training method called "pivotal tuning", which trains an embedding and a LoRA (DreamBooth) at the same time.
But if the trainer wants to train multiple concepts within one model, this produces a lot of embedding files, which are hard for users and trainers to manage.
So I developed this bundle system, which stores the bundled embeddings within the LoRA file and loads them with the built-in lora extension.
The state_dict key name format:
"bundle_emb.EMBEDDING_NAME.KEY_NAME"
KEY_NAME here is the key in the embedding's state_dict, and EMBEDDING_NAME is the trigger word of that embedding. With this format we can store multiple embeddings within one LoRA file, which is good for pivotal tuning.
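To make the key scheme concrete, here is a minimal sketch of how a trainer could merge trained embeddings into a LoRA state_dict under these keys and save everything as one file. The pack_bundle helper, the key names, and the tensor shapes are illustrative assumptions, not part of this PR or of any trainer's actual API.

```python
import torch
from safetensors.torch import save_file

def pack_bundle(lora_state_dict, embeddings):
    """Merge embeddings into a copy of the LoRA state_dict.

    embeddings: {trigger_word: {key_name: tensor}}, i.e. each embedding's
    own flattened state_dict (keys like "string_to_param.*").
    """
    bundled = dict(lora_state_dict)
    for emb_name, emb_state in embeddings.items():
        for key_name, tensor in emb_state.items():
            # "bundle_emb.EMBEDDING_NAME.KEY_NAME"
            bundled[f"bundle_emb.{emb_name}.{key_name}"] = tensor
    return bundled

# Example with dummy tensors: one LoRA weight plus one 2-token embedding
# for the trigger word "rosmontis" (shapes are placeholders).
lora_sd = {"lora_unet_down_blocks_0_attentions_0_proj_in.lora_up.weight": torch.zeros(32, 4)}
emb_sd = {"string_to_param.*": torch.zeros(2, 768)}
save_file(pack_bundle(lora_sd, {"rosmontis": emb_sd}), "rosmontis_bundle.safetensors")
```

On the webui side, the loader then only has to strip the `bundle_emb.` prefix to recover each embedding's trigger word and state_dict; the remaining keys are a normal LoRA.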
Thx to @narugo1992 and @cyber-meow for this idea.
Checklist: