Replies: 1 comment
CLIP just doesn't support that; long sequences aren't its thing.
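The warning comes from CLIP's fixed text context of 77 tokens: anything past that is silently dropped before encoding. A minimal sketch of that truncation logic is below; a real pipeline uses CLIP's BPE tokenizer, and the whitespace split here is only a stand-in to show where the dropped part of the caption goes.

```python
# Sketch of CLIP-style caption truncation. CLIP's text encoder has a fixed
# context of 77 tokens; `split()` stands in for the real BPE tokenizer.
MAX_TOKENS = 77

def truncate_caption(caption: str, max_tokens: int = MAX_TOKENS) -> list[str]:
    tokens = caption.split()  # stand-in for CLIP's BPE tokenizer
    kept, dropped = tokens[:max_tokens], tokens[max_tokens:]
    if dropped:
        # This is the situation the trainer's warning is reporting.
        print(f"WARNING: truncated {len(dropped)} tokens: {' '.join(dropped)}")
    return kept

short = truncate_caption("water, architectural structure, misty background")
long_caption = " ".join(["token"] * 100)
kept = truncate_caption(long_caption)
print(len(kept))  # -> 77
```

In short, the warning is informational: training proceeds, but only the first 77 tokens of the caption reach the text encoder.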
I am a newbie who moved from OneTrainer to SimpleTuner because it can use multiple GPUs. When I first trained a LoRA, I got this warning: "2024-08-10 13:08:55,409 [WARNING] (TextEmbeddingCache) The following part of your input was truncated because CLIP can only handle sequences up to 77 tokens: ['water, architectural structure, picturesque, misty background']"
I also want to change the LoRA to rank 128/128. Please help.
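On the rank question: most LoRA trainers, SimpleTuner included, expose rank and alpha as training options. The flag names below (`--lora_rank`, `--lora_alpha`) are an assumption based on common conventions, not confirmed against a specific SimpleTuner version; verify them against your version's `--help` output or its options documentation before use.

```shell
# Assumed flag names -- confirm with your SimpleTuner version's --help.
# "128/128" is read here as rank 128 with alpha 128.
python train.py \
  --lora_rank=128 \
  --lora_alpha=128
  # plus the rest of your existing training arguments
```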