Conversation

@annaJ2178
The original initialization order (adding LoRA before init_weights) leaves lora_A initialized to all zeros. Adding LoRA after init_weights fixes that.

…ization all zeros --> No training

Signed-off-by: annaJ2178 <[email protected]>
# Initialize the base model weights first, then attach LoRA so the
# (random A, zero B) adapter initialization is preserved.
net.init_weights()

if config.use_lora:
    self.add_lora(
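For reference, here is a minimal, self-contained sketch of why the call order matters (the class, the init scheme, and every name below are hypothetical stand-ins, not the project's actual code): if a weight-init pass runs after the adapter is attached, it re-initializes lora_A along with everything else, so both lora_A and lora_B end up zero; with both zero, the gradient of each is also zero (each factor's gradient is scaled by the other factor), and the adapter can never leave that state. Initializing the base weights first and attaching LoRA afterwards keeps the standard (random A, zero B) start.

import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a linear layer with a low-rank adapter: y = Wx + B(Ax)."""

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        # Standard LoRA init: A is small random, B is zero, so the adapter
        # starts as a no-op but A still provides gradient signal for B.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        return self.base(x) + (x @ self.lora_A.T) @ self.lora_B.T


def init_weights(module):
    # Stand-in for a typical init_weights pass: it visits every parameter,
    # including the already-attached LoRA ones, and clobbers lora_A.
    for p in module.parameters():
        nn.init.zeros_(p)


# Buggy order: attach LoRA first, then init_weights -> lora_A is all zeros.
buggy = LoRALinear(nn.Linear(16, 16))
init_weights(buggy)
print(buggy.lora_A.abs().sum().item())  # 0.0 -> adapter can never train

# Fixed order: init_weights on the base module first, then attach LoRA,
# preserving the (random A, zero B) initialization.
base = nn.Linear(16, 16)
init_weights(base)
fixed = LoRALinear(base)
print(fixed.lora_A.abs().sum().item())  # > 0 -> adapter trains normally

The exact init scheme in the real codebase may differ; the key point is only that any re-initialization pass that visits the adapter parameters will overwrite the LoRA-specific initialization.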

Thank you for pointing it out. I agree that adding LoRA after weight initialization is better, since it preserves the (random A, zero B) matrices. We will add the change to the code.

@arslananvidia
@pjannaty please feel free to close this issue. It has been addressed.
