Conversation

@Vivek1106-04

Description

This PR lays the groundwork for supporting Llama 3.2 Vision (11B/90B) in Keras Hub (relates to Issue #2470).

It introduces:

  1. Llama3VisionConfig: Extends the text backbone config to support vision_encoder_config and cross_attention_layers.
  2. Llama3VisionBackbone: A skeleton class to establish the API surface for the multimodal architecture.

Design Decisions

  • Inheritance: The Config inherits from Llama3BackboneConfig to reuse the existing RoPE and Transformer settings.
  • Cross-Attention Strategy: Added a cross_attention_layers list to the config. This lets us flexibly handle the interleaved architecture of Llama 3.2, where vision injection happens only at specific depth indices (see the sketch after this list).
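
For illustration, here is a minimal sketch of how the config might be constructed under these decisions. The encoder-config dict, the projection size, and the layer indices below are all placeholders, and the text-side arguments inherited from Llama3BackboneConfig are omitted:

```python
# Hypothetical usage sketch; values are illustrative, not checkpoint-accurate.
vision_config = Llama3VisionConfig(
    # Placeholder encoder settings; the real encoder config is still TODO.
    vision_encoder_config={"hidden_dim": 1280, "num_layers": 32},
    vision_projection_dim=4096,
    # Depth indices at which gated cross-attention is interleaved.
    cross_attention_layers=[3, 8, 13, 18, 23, 28, 33, 38],
)
backbone = Llama3VisionBackbone(config=vision_config)
```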

Next Steps

  • Implement the Llama3VisionEncoder class.
  • Implement the GatedCrossAttention block (a rough sketch of the intended shape follows below).
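
As a reference point for the GatedCrossAttention item, here is a minimal sketch of a Flamingo-style gated cross-attention block in Keras 3. The class name matches the plan above, but the layer choices, gating scheme, and argument names are assumptions, not the final design:

```python
from keras import layers, ops


class GatedCrossAttention(layers.Layer):
    """Cross-attends text hidden states to vision features.

    The tanh gate is zero-initialized so the block starts as an
    identity mapping and leaves the pretrained text path undisturbed.
    """

    def __init__(self, num_heads, hidden_dim, **kwargs):
        super().__init__(**kwargs)
        self.cross_attention = layers.MultiHeadAttention(
            num_heads=num_heads,
            key_dim=hidden_dim // num_heads,
        )
        # Scalar gate: starts at 0.0, so tanh(gate) = 0 and the block
        # initially contributes nothing.
        self.attn_gate = self.add_weight(
            name="attn_gate", shape=(), initializer="zeros"
        )

    def call(self, hidden_states, vision_features):
        attn_output = self.cross_attention(
            query=hidden_states,
            value=vision_features,
            key=vision_features,
        )
        return hidden_states + ops.tanh(self.attn_gate) * attn_output
```

In the backbone's call, the config's cross_attention_layers list would then decide at which decoder depths such a block is applied.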

@gemini-code-assist
Contributor

Summary of Changes

Hello @Vivek1106-04, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request lays the essential groundwork for integrating Llama 3.2 Vision models into Keras Hub. It introduces the foundational configuration class, Llama3VisionConfig, to manage vision-specific parameters, and a skeletal Llama3VisionBackbone class to define the multimodal model's architecture. This initial setup is crucial for enabling future development of vision-language capabilities within the Keras ecosystem.

Highlights

  • Llama 3.2 Vision Support Foundation: This PR establishes the initial configuration and a skeletal backbone for integrating Llama 3.2 Vision (11B/90B) models into Keras Hub, laying the groundwork for multimodal capabilities.
  • Llama3VisionConfig Introduction: A new configuration class, Llama3VisionConfig, is added. It extends Llama3BackboneConfig to include vision-specific parameters such as vision_encoder_config, vision_projection_dim, and cross_attention_layers.
  • Llama3VisionBackbone Skeleton: A Llama3VisionBackbone class is introduced as a placeholder to define the API surface for the multimodal architecture. It includes a basic constructor with input validation and a call method with TODOs for future implementation.
  • Flexible Cross-Attention Strategy: The design incorporates a cross_attention_layers list in the configuration, allowing for flexible handling of Llama 3.2's interleaved architecture where vision injection occurs at specific transformer layer depths.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces the initial skeleton for the Llama 3.2 Vision model, including the configuration and backbone classes. The changes establish the basic structure, but there are critical issues with model serialization that need to be addressed. Specifically, the get_config methods in both new classes do not correctly handle serialization of configuration objects, which would prevent saving and loading models. I've provided suggestions to fix the serialization logic in Llama3VisionBackbone and pointed out a similar issue in Llama3VisionConfig. I also noted some minor formatting issues.

Comment on lines 36 to 37:

```python
def get_config(self):
    return {"config": self.config}
```

critical

The get_config method returns the self.config object directly. This object is not JSON-serializable, which will cause errors when saving the model (e.g., with model.save()). The get_config method must return a JSON-serializable dictionary.

To fix this and align with Keras serialization patterns, you should:

  1. In get_config, call super().get_config() to include base properties like name and trainable, as recommended by the style guide.
  2. Serialize the config object by calling its get_config() method.
  3. Implement the from_config classmethod to correctly reconstruct the backbone from the serialized configuration. This is necessary because the default from_config from Backbone won't know how to handle the nested config object.

Additionally, the file is missing a final newline. Please run the code formatter to fix this.

Suggested change

```diff
-def get_config(self):
-    return {"config": self.config}
+def get_config(self):
+    config = super().get_config()
+    config["config"] = self.config.get_config()
+    return config
+
+@classmethod
+def from_config(cls, config):
+    from .llama3_vision_config import Llama3VisionConfig
+
+    config_data = config.pop("config")
+    vision_config = Llama3VisionConfig(**config_data)
+    return cls(config=vision_config, **config)
```
References
  1. The get_config() method should chain to its superclass to ensure base properties are preserved, as outlined in the layer implementation guidelines. (link)
  2. All components must be serializable to support saving and loading, which is a core part of the validation process. (link)
  3. Code should be formatted with ruff. A missing final newline is a formatting issue that the tool would fix. (link)
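
To make the intent concrete, here is a quick round-trip check that the suggested fix should enable. The snippet assumes the constructor signature shown in this PR and is illustrative, not part of the suggestion:

```python
# Rebuild the backbone from its own serialized config.
backbone = Llama3VisionBackbone(config=vision_config)
restored = Llama3VisionBackbone.from_config(backbone.get_config())
assert (
    restored.config.cross_attention_layers
    == vision_config.cross_attention_layers
)
```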

Comment on lines 40 to 47
config.update(
{
"vision_encoder_config": self.vision_encoder_config,
"vision_projection_dim": self.vision_projection_dim,
"cross_attention_layers": self.cross_attention_layers,
}
)
return config No newline at end of file

high

The get_config method directly includes self.vision_encoder_config. According to the docstring, this can be a config instance, which is not JSON-serializable. This will break serialization when Llama3VisionBackbone.get_config() is called. To ensure the entire configuration is serializable, you should convert self.vision_encoder_config to a dictionary if it's an object, for instance by calling a .get_config() method on it. This is crucial for model saving and loading.

Also, the file is missing a final newline. Please run the code formatter to add it.
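
No suggestion block was attached here, but a minimal sketch of the fix, assuming the nested encoder config exposes its own get_config() method, could look like this:

```python
def get_config(self):
    config = super().get_config()
    vision_encoder_config = self.vision_encoder_config
    # Convert a nested config object into a plain dict so the result
    # stays JSON-serializable.
    if vision_encoder_config is not None and not isinstance(
        vision_encoder_config, dict
    ):
        vision_encoder_config = vision_encoder_config.get_config()
    config.update(
        {
            "vision_encoder_config": vision_encoder_config,
            "vision_projection_dim": self.vision_projection_dim,
            "cross_attention_layers": self.cross_attention_layers,
        }
    )
    return config
```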

References
  1. All components must be serializable to support saving and loading. Config objects must return a serializable dictionary from get_config. (link)
  2. Code should be formatted with ruff. A missing final newline is a formatting issue that the tool would fix. (link)
