Conversation
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
Force-pushed from 0dfc743 to 8024990
Force-pushed from 8024990 to abb99fc
Force-pushed from abb99fc to 5dc8c1e
/gemini summarize

/gemini review
Hello paulb777, I understand you're looking for a code review. In this conversation thread, I'm here to answer specific questions you might have about the code or previous review comments, or to provide targeted suggestions. If you'd like a full review of the pull request, please create a new comment with /gemini review.
Code Review
This pull request adds support for implicit caching in GenerateContentResponse by introducing cachedContentTokenCount and cacheTokensDetails to UsageMetadata. The changes are well-implemented, including updates to the data model, decoding logic, and comprehensive tests. I've included a few minor suggestions to improve test robustness, code style, and documentation clarity. Overall, this is a solid contribution.
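To illustrate what the new fields enable: once `usageMetadata` reports `cachedContentTokenCount` alongside the overall prompt token count, a caller can estimate how much of a prompt was served from the implicit cache. A minimal sketch, where the helper name `cacheHitRatio` is hypothetical and not part of this PR:

```swift
// Hypothetical helper: the fraction of prompt tokens served from the
// implicit cache, given the two counts reported in UsageMetadata.
func cacheHitRatio(promptTokens: Int, cachedTokens: Int) -> Double {
  guard promptTokens > 0 else { return 0 }
  return Double(cachedTokens) / Double(promptTokens)
}

// e.g. with usageMetadata.promptTokenCount == 2000 and
// usageMetadata.cachedContentTokenCount == 1500:
print(cacheHitRatio(promptTokens: 2000, cachedTokens: 1500))  // 0.75
```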
FirebaseAI/Tests/TestApp/Tests/Integration/ImplicitCacheTests.swift
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
/gemini summarize
Activity
Add metadata support for the two fields related to implicit caching.

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Summary of Changes
This pull request introduces comprehensive support for implicit caching within the FirebaseAI library. It enables the logging and retrieval of token usage metadata related to cached content, providing developers with insights into the efficiency of their generative model calls. The changes include adding new fields to the `GenerateContentResponse` structure, updating its decoding logic, and implementing dedicated integration and unit tests to validate the new functionality.

Highlights

- Added implicit caching support to `GenerateContentResponse`, allowing access to `cachedContentTokenCount` and `cacheTokensDetails` within `UsageMetadata` to track token savings from cached content.
- Added `cachedContentTokenCount` (`Int`) and `cacheTokensDetails` (`[ModalityTokenCount]`) properties to the `GenerateContentResponse.UsageMetadata` struct and updated its `Decodable` conformance to correctly parse these new fields.
- Added new integration tests (`ImplicitCacheTests.swift`) to specifically verify the functionality and reporting of implicit caching, particularly for scenarios involving large contexts.
- Updated `APITests.swift` to ensure correct decoding and handling of the new `cachedContentTokenCount` and `cacheTokensDetails` fields in `GenerateContentResponse`.
- Added an entry to the `CHANGELOG.md` file.
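The decoding behavior described in the highlights can be sketched with local stand-in types. These mirror the shapes named in the summary but are not the FirebaseAI implementation; in particular, it is an assumption here that the backend omits the cache fields when nothing was served from cache, so the sketch defaults them to 0 / empty rather than failing to decode:

```swift
import Foundation

// Local stand-ins for the types named in this PR (not the FirebaseAI code).
struct ModalityTokenCount: Decodable {
  let modality: String
  let tokenCount: Int
}

struct UsageMetadata: Decodable {
  let promptTokenCount: Int
  let cachedContentTokenCount: Int
  let cacheTokensDetails: [ModalityTokenCount]

  enum CodingKeys: String, CodingKey {
    case promptTokenCount, cachedContentTokenCount, cacheTokensDetails
  }

  init(from decoder: Decoder) throws {
    let container = try decoder.container(keyedBy: CodingKeys.self)
    promptTokenCount =
      try container.decodeIfPresent(Int.self, forKey: .promptTokenCount) ?? 0
    // Assumed behavior: cache fields are absent when nothing was cached,
    // so default to 0 / empty instead of throwing.
    cachedContentTokenCount =
      try container.decodeIfPresent(Int.self, forKey: .cachedContentTokenCount) ?? 0
    cacheTokensDetails =
      try container.decodeIfPresent([ModalityTokenCount].self, forKey: .cacheTokensDetails) ?? []
  }
}

let json = """
{
  "promptTokenCount": 1500,
  "cachedContentTokenCount": 1024,
  "cacheTokensDetails": [{ "modality": "TEXT", "tokenCount": 1024 }]
}
""".data(using: .utf8)!

let usage = try! JSONDecoder().decode(UsageMetadata.self, from: json)
print("cached:", usage.cachedContentTokenCount)  // cached: 1024
print("text cache tokens:", usage.cacheTokensDetails[0].tokenCount)
```

Decoding a payload without the two new keys would yield `cachedContentTokenCount == 0` and an empty `cacheTokensDetails`, which keeps older responses decodable.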