Add Lion optimizer #610
Conversation
Thank you for the PR! Looking great!
For performance reasons, we reimplement our optimizers for torch using torch C++ ops called from Python -- here's an example of such a PR: https://github.com/keras-team/keras-core/pull/534/files
Would you be able to include the torch version of the optimizer in this PR?
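For context, here is a minimal sketch of what a Lion update step written with torch ops could look like; the function name `lion_update` and the single-tensor formulation are assumptions for illustration, not code from this PR. The linked example PR follows the same idea of calling torch C++ ops from Python, batched across the whole variable list for speed.

```python
import torch


@torch.no_grad()
def lion_update(param, grad, m, lr=1e-3, beta_1=0.9, beta_2=0.99):
    """One Lion step for a single parameter tensor (hypothetical sketch).

    `m` is the momentum buffer (exponential moving average of gradients).
    """
    # Interpolate between momentum and gradient, then take the sign.
    update = torch.sign(m.mul(beta_1).add(grad, alpha=1 - beta_1))
    # Apply the sign-based update scaled by the learning rate.
    param.add_(update, alpha=-lr)
    # Update the momentum buffer in place.
    m.mul_(beta_2).add_(grad, alpha=1 - beta_2)
```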
keras_core/optimizers/lion.py (Outdated)

```python
    def __init__(
        self,
        learning_rate=0.0001,
```
Just noticed this -- the LR here is 1e-4, but it should be 1e-3 like in the other optimizers. This was already an issue in tf.keras, we should fix it here.
Fixed
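For reference, a sketch of how the corrected constructor could read after this change; the `beta_1`/`beta_2` defaults follow the Lion paper, and the base-class call is an assumption about the keras_core `Optimizer` API rather than a quote from this PR:

```python
from keras_core.optimizers import optimizer


class Lion(optimizer.Optimizer):
    def __init__(
        self,
        learning_rate=0.001,  # was 0.0001; aligned with the other optimizers
        beta_1=0.9,
        beta_2=0.99,
        name="lion",
        **kwargs,
    ):
        super().__init__(learning_rate=learning_rate, name=name, **kwargs)
        self.beta_1 = beta_1
        self.beta_2 = beta_2
```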
keras_core/optimizers/lion.py (Outdated)

```python
        self.beta_2 = beta_2
        if beta_1 <= 0 or beta_1 > 1:
            raise ValueError(
                "Argument `beta_1` must be between [0, 1]. Otherwise, the "
```
Use either "between 0 and 1" or "in the [0, 1] range".
Fixed
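A sketch of the revised check with the suggested wording; the tail of the message after "Otherwise, the" is truncated in the quote above, so its completion here is an assumption (with `beta_1 = 0` the update reduces to `sign(grad)`, i.e. sign SGD):

```python
if beta_1 <= 0 or beta_1 > 1:
    raise ValueError(
        "Argument `beta_1` must be in the [0, 1] range. Otherwise, the "
        f"optimizer degenerates to sign SGD. Received: beta_1={beta_1}."
    )
```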
I didn't notice that there is a separate folder for torch's optimizers. As torch lacks …
Awesome work! 👍
LGTM
Related to keras-team/keras#18442
EDITED: updated to learning_rate=0.001.
The golden values in `test_correctness_with_golden` are generated by `tf.keras.optimizers.Lion(learning_rate=0.001)`; the script prints the following output:

```
[0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999]
[0.998 0.998 0.998 0.998 0.998 0.998 0.998 0.998 0.998 0.998]
[0.997 0.997 0.997 0.997 0.997 0.997 0.997 0.997 0.997 0.997]
[0.996 0.996 0.996 0.996 0.996 0.996 0.996 0.996 0.996 0.996]
[0.995 0.995 0.995 0.995 0.995 0.995 0.995 0.995 0.995 0.995]
```
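The script body itself is not reproduced above; here is a hedged reconstruction that yields exactly this output, assuming a single 10-element variable initialized to ones and a constant all-ones gradient (each Lion step then moves every weight by exactly lr = 0.001, since the update is sign-based):

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Lion(learning_rate=0.001)
x = tf.Variable(tf.ones([10]))
grad = tf.ones([10])  # constant gradient; the sign-based update is +1 everywhere

for _ in range(5):
    optimizer.apply_gradients([(grad, x)])
    print(x.numpy())
```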