
Conversation

Contributor

@pass-lin pass-lin commented Sep 6, 2025

Mainly two simple modifications:

1. Add a description of the ModelScope download source to the `from_preset` documentation.
2. Make the SAM model support bf16 inference.
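For context, preset handles in keras_hub are distinguished by a scheme prefix (`kaggle://`, `hf://`, and now `modelscope://`). A minimal sketch of that kind of resolution, with hypothetical names (`KNOWN_SCHEMES`, `parse_preset_handle`) rather than the library's actual API:

```python
# Illustrative sketch of scheme-prefix handle parsing, in the spirit of
# keras_hub's preset resolution. These names are assumptions for the
# example, not the functions in keras_hub/src/utils/preset_utils.py.

KNOWN_SCHEMES = ("kaggle://", "hf://", "modelscope://", "gs://")

def parse_preset_handle(preset):
    """Split a preset string into (scheme, path); anything else is local."""
    for scheme in KNOWN_SCHEMES:
        if preset.startswith(scheme):
            return scheme[:-3], preset[len(scheme):]  # strip the "://"
    return "local", preset

print(parse_preset_handle("modelscope://user/bert_base_en"))
# ('modelscope', 'user/bert_base_en')
```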

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @pass-lin, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces minor but impactful changes by expanding the supported model download sources and enhancing the compatibility of the SAM model. It aims to provide more flexibility for users to load models and improve the performance options for the SAM model.

Highlights

  • Model Source Documentation: Added ModelScope as a new valid source for downloading models in the from_preset documentation.
  • SAM Model BF16 Support: Enabled the SAM model's prompt encoder to support bf16 (bfloat16) inference by passing the dtype to the positional embedding layer.
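As background on what bf16 trades away: bfloat16 keeps float32's 8-bit exponent (so the same dynamic range) but only 7 mantissa bits. A quick illustration by bit truncation (hardware typically rounds to nearest rather than truncating, so this is a sketch of the format, not of any one backend):

```python
import struct

def to_bf16_bits(x):
    """Keep the top 16 bits of a float32: bfloat16 by truncation."""
    return struct.unpack("<I", struct.pack("<f", x))[0] >> 16

def from_bf16_bits(b):
    """Re-expand 16 stored bits back into a float32 value."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

print(from_bf16_bits(to_bf16_bits(1.0)))      # 1.0 (exactly representable)
print(from_bf16_bits(to_bf16_bits(3.14159)))  # 3.140625 (only 7 mantissa bits)
```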

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces two minor modifications: adding ModelScope as a download source in the documentation and enabling bfloat16 inference for the SAM model. My review includes a correction for a typo in the documentation. More importantly, while the change to support bfloat16 is a good start, it's incomplete. I've provided feedback on how to apply this change consistently across all relevant layers to ensure the feature works as intended and to prevent potential runtime issues.

Comment on lines 69 to 73
```diff
 self.positional_embedding_layer = RandomFrequencyPositionalEmbeddings(
-    num_positional_features=self.hidden_size // 2, scale=1
+    num_positional_features=self.hidden_size // 2,
+    scale=1,
+    dtype=self.dtype,
 )
```
Contributor

Severity: high

Passing dtype=self.dtype here is a good step towards supporting bfloat16 inference. However, to ensure full compatibility and prevent potential dtype mismatches, this change should be applied consistently to all other layers instantiated within the __init__ method.

For example, the keras.layers.Embedding layers and the layers inside mask_downscaler are also missing the dtype argument. Without this, the model might not function correctly when using bfloat16 precision.
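The pattern the bot is describing can be sketched with toy stand-ins (these are minimal classes written for illustration, not the actual `keras.layers` classes, though Keras layers do accept a `dtype` argument): a parent layer should forward its dtype to every child it constructs.

```python
# Toy sketch of dtype propagation from a parent layer to its children.
# `Layer`, `Embedding`, and `PromptEncoder` are illustrative stand-ins.

class Layer:
    def __init__(self, dtype="float32"):
        self.dtype = dtype

class Embedding(Layer):
    pass

class PromptEncoder(Layer):
    def __init__(self, dtype="float32"):
        super().__init__(dtype=dtype)
        # Without dtype=self.dtype, children would default to float32
        # and a bfloat16 parent would mix precisions at runtime.
        self.point_embedding = Embedding(dtype=self.dtype)
        self.mask_embedding = Embedding(dtype=self.dtype)

encoder = PromptEncoder(dtype="bfloat16")
print(encoder.point_embedding.dtype)  # bfloat16
```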

Contributor Author

I have tested it; only this spot needs to be modified.

Comment on lines 133 to 134
```
4. a ModelScope Face handle like `'modelscope://user/bert_base_en'`
5. a path to a local preset directory like `'./bert_base_en'`
```
Contributor

Severity: medium

There's a small typo in the description for the ModelScope handle. It should be ModelScope handle instead of ModelScope Face handle for consistency with other parts of the codebase (e.g., keras_hub/src/utils/preset_utils.py) and to avoid confusion with Hugging Face.

Suggested change

```diff
-4. a ModelScope Face handle like `'modelscope://user/bert_base_en'`
+4. a ModelScope handle like `'modelscope://user/bert_base_en'`
 5. a path to a local preset directory like `'./bert_base_en'`
```

Member

@mattdangerw mattdangerw left a comment


Thanks!

@mattdangerw mattdangerw merged commit e7dd9b0 into keras-team:master Sep 10, 2025
6 of 7 checks passed
amitsrivastava78 pushed a commit to amitsrivastava78/keras-hub that referenced this pull request Oct 8, 2025
* minor modify

* fix
