minor modify #2386
Conversation
Summary of Changes
Hello @pass-lin, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces minor but impactful changes by expanding the supported model download sources and enhancing the compatibility of the SAM model. It aims to provide more flexibility for users to load models and improve the performance options for the SAM model.
Highlights
- Model Source Documentation: Added ModelScope as a new valid source for downloading models in the `from_preset` documentation.
- SAM Model BF16 Support: Enabled the SAM model's prompt encoder to support `bf16` (bfloat16) inference by passing the `dtype` to the positional embedding layer.
Code Review
This pull request introduces two minor modifications: adding ModelScope as a download source in the documentation and enabling bfloat16 inference for the SAM model. My review includes a correction for a typo in the documentation. More importantly, while the change to support bfloat16 is a good start, it's incomplete. I've provided feedback on how to apply this change consistently across all relevant layers to ensure the feature works as intended and to prevent potential runtime issues.
```diff
  self.positional_embedding_layer = RandomFrequencyPositionalEmbeddings(
-     num_positional_features=self.hidden_size // 2, scale=1
+     num_positional_features=self.hidden_size // 2,
+     scale=1,
+     dtype=self.dtype,
  )
```
Passing `dtype=self.dtype` here is a good step towards supporting bfloat16 inference. However, to ensure full compatibility and prevent potential dtype mismatches, this change should be applied consistently to all other layers instantiated within the `__init__` method.
For example, the `keras.layers.Embedding` layers and the layers inside `mask_downscaler` are also missing the `dtype` argument. Without this, the model might not function correctly when using bfloat16 precision.
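As a minimal sketch of the propagation pattern the reviewer describes (toy classes, not the real Keras API): a composite layer should forward its own `dtype` to every sublayer it constructs in `__init__`; any sublayer built without it silently falls back to the default dtype, which is exactly the mismatch risk under bfloat16.

```python
# Toy stand-in for a Keras layer; only models the dtype attribute.
class ToyLayer:
    def __init__(self, dtype="float32"):
        self.dtype = dtype


class ToyPromptEncoder(ToyLayer):
    """Hypothetical parent layer illustrating dtype propagation."""

    def __init__(self, dtype="float32"):
        super().__init__(dtype=dtype)
        # Correct: forward self.dtype to the sublayer (as in this PR's fix).
        self.positional_embedding = ToyLayer(dtype=self.dtype)
        # The pattern the reviewer warns about: dtype omitted, so this
        # sublayer stays on the default and mismatches under bf16.
        self.mask_downscaler = ToyLayer()


encoder = ToyPromptEncoder(dtype="bfloat16")
print(encoder.positional_embedding.dtype)  # bfloat16
print(encoder.mask_downscaler.dtype)       # float32 (mismatch)
```

The fix the review suggests amounts to adding `dtype=self.dtype` to every sublayer constructor, as was done for the positional embedding layer.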
I have tested it; only this change is needed.
keras_hub/src/models/backbone.py (outdated)
```
4. a ModelScope Face handle like `'modelscope://user/bert_base_en'`
5. a path to a local preset directory like `'./bert_base_en'`
```
There's a small typo in the description for the ModelScope handle. It should be ModelScope handle instead of ModelScope Face handle for consistency with other parts of the codebase (e.g., keras_hub/src/utils/preset_utils.py) and to avoid confusion with Hugging Face.
```diff
- 4. a ModelScope Face handle like `'modelscope://user/bert_base_en'`
+ 4. a ModelScope handle like `'modelscope://user/bert_base_en'`
  5. a path to a local preset directory like `'./bert_base_en'`
```
mattdangerw
left a comment
Thanks!
Commits:
* minor modify
* fix
Mainly two simple modifications:
1. Add a description of the ModelScope download source to the `from_preset` documentation.
2. Make the SAM model support bf16 inference.
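The preset handle forms discussed in this PR (a `modelscope://` handle versus a local directory path, alongside the other remote sources) can be sketched with a small, hypothetical dispatcher. This is an illustration of the scheme-prefix convention only, not the actual keras-hub loader; the `kaggle://` and `hf://` prefixes are assumed from the numbered list the diff extends.

```python
# Hypothetical classifier for from_preset-style handles. The real loader
# lives in keras_hub/src/utils/preset_utils.py; this only illustrates
# how a scheme prefix selects the download source.
def classify_preset(preset: str) -> str:
    schemes = {
        "kaggle://": "kaggle",          # assumed from list items 1-3
        "hf://": "huggingface",         # assumed from list items 1-3
        "modelscope://": "modelscope",  # added by this PR's doc change
    }
    for prefix, source in schemes.items():
        if preset.startswith(prefix):
            return source
    # Anything without a known scheme is treated as a local directory.
    return "local_directory"


print(classify_preset("modelscope://user/bert_base_en"))  # modelscope
print(classify_preset("./bert_base_en"))                  # local_directory
```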