
Conversation

@meithon commented Jul 2, 2025

No description provided.

@gemini-code-assist bot left a comment

Summary of Changes

Hello @meithon, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the plugin's capabilities by adding support for OpenRouter as an alternative AI provider. It introduces a new configuration mechanism, integrates the OpenRouter API for chat completions, and updates the documentation to guide users on how to leverage these new features. Additionally, it includes improvements to code clarity through type hinting and a new sample file.

Highlights

  • OpenRouter API Integration: Implemented the core logic in lua/nes/api.lua to support making API calls to OpenRouter. This includes handling API key retrieval, constructing the request payload, and processing streaming responses from the OpenRouter chat completions endpoint.
  • Flexible Configuration: Introduced new configuration options via nes.setup() in lua/nes/init.lua, documented in README.md. Users can now specify provider (either 'copilot' or 'openrouter'), api_key, base_url, and model to customize their AI backend and model choice; a hedged setup sketch follows this list.
  • Improved Type Hinting: Added comprehensive LuaDoc type definitions in lua/nes/context.lua for key data structures like CurrentVersion, Context, Message, and Payload. This enhances code readability, maintainability, and tooling support.
  • New Sample File: A new Go language sample file, samplecode.go, has been added, likely for testing or demonstration purposes within the project's context.
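To make the new options concrete, here is a minimal setup sketch assembled from the PR summary and the README snippet reviewed below. The environment-variable lookup for the API key is an assumption for illustration, not necessarily how the plugin sources it:

-- Minimal sketch of the new configuration surface (values taken from the README snippet under review).
require('nes').setup({
  provider = 'openrouter', -- or 'copilot'
  api_key = os.getenv('OPENROUTER_API_KEY'), -- assumption: key read from the environment
  base_url = 'https://openrouter.ai/api/v1/chat/completions', -- full endpoint URL; see the review note below
  model = 'gpt-3.5-turbo',
})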

@meithon marked this pull request as draft July 2, 2025 12:50
@gemini-code-assist bot left a comment

Code Review

This pull request adds support for OpenRouter as a provider. The stream processing logic for OpenRouter responses is not robust and could fail if multiple events are received in one chunk. There are also some leftover debug print statements that should be removed. Finally, the README documentation for the new base_url configuration is inconsistent with the implementation, which could confuse users.

Comment on lines +95 to +115
stream = function(_, chunk)
  if not chunk then
    return
  end
  if vim.startswith(chunk, "data: ") then
    chunk = chunk:sub(6)
  end
  if chunk == "[DONE]" then
    return
  end
  local ok, event = pcall(vim.json.decode, chunk)
  if not ok then
    return
  end
  if event and event.choices and event.choices[1] then
    local choice = event.choices[1]
    if choice.delta and choice.delta.content then
      output = output .. choice.delta.content
    end
  end
end,


Severity: high

The current stream processing logic may not be robust enough to handle cases where multiple Server-Sent Events (SSE) arrive in a single data chunk. The implementation assumes one data: line per chunk, which could lead to JSON decoding errors and incomplete responses if the provider sends multiple events together.

I recommend iterating over each line in the chunk to process each event separately. This will make the streaming logic more resilient.

stream = function(_, chunk)
  if not chunk then
    return
  end
  for line in chunk:gmatch("[^\r\n]+") do
    if vim.startswith(line, "data: ") then
      local data = line:sub(7)
      if data == "[DONE]" then
        return
      end
      local ok, event = pcall(vim.json.decode, data)
      if ok and event and event.choices and event.choices[1] then
        local choice = event.choices[1]
        if choice.delta and choice.delta.content then
          output = output .. choice.delta.content
        end
      end
    end
  end
end
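For context, an SSE provider can flush several events in one network read, so a single chunk may carry multiple data: lines. A hypothetical chunk (the JSON payloads are invented for illustration) shows why the per-line loop matters:

-- Hypothetical multi-event chunk; payloads illustrative only.
local chunk = table.concat({
  'data: {"choices":[{"delta":{"content":"Hel"}}]}',
  'data: {"choices":[{"delta":{"content":"lo"}}]}',
  'data: [DONE]',
}, "\n")
-- The suggested loop appends "Hel" and then "lo" before stopping at [DONE];
-- the single-event version would fail to decode the whole chunk and drop both deltas.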

Comment on lines +42 to +43
base_url = 'https://api.openrouter.ai/v1', -- optional, default "https://api.openrouter.ai/v1"
model = 'gpt-3.5-turbo', -- optional, default "gpt-3.5-turbo" for openrouter, "copilot-nes-v" for copilot


Severity: medium

The documentation for base_url is inconsistent with the implementation. The example value and the default mentioned in the comment (https://api.openrouter.ai/v1) point to a base path, but the code expects the full endpoint URL for chat completions. The actual default in the code is https://openrouter.ai/api/v1/chat/completions.

This discrepancy will likely confuse users and lead to incorrect configurations. Please update the documentation to reflect the correct default value and clarify that the full endpoint URL is expected.

Suggested change
-base_url = 'https://api.openrouter.ai/v1', -- optional, default "https://api.openrouter.ai/v1"
-model = 'gpt-3.5-turbo', -- optional, default "gpt-3.5-turbo" for openrouter, "copilot-nes-v" for copilot
+base_url = 'https://openrouter.ai/api/v1/chat/completions', -- optional, default "https://openrouter.ai/api/v1/chat/completions"
+model = 'gpt-3.5-turbo', -- optional, default "gpt-3.5-turbo" for openrouter, "copilot-nes-v" for copilot
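For reference, a minimal sketch of how a full-endpoint base_url would typically be consumed. The opts and messages names and the curl-via-vim.system plumbing are assumptions for illustration, not the plugin's actual code:

-- Sketch: the configured value is used verbatim as the request URL,
-- which is why a bare base path like "https://api.openrouter.ai/v1" would not reach the endpoint.
local url = opts.base_url or 'https://openrouter.ai/api/v1/chat/completions'
local body = vim.json.encode({
  model = opts.model or 'gpt-3.5-turbo',
  messages = messages, -- assumed to be built elsewhere
  stream = true,
})
vim.system({
  'curl', '-sS', '-N', url, -- -N disables buffering so SSE chunks stream through
  '-H', 'Authorization: Bearer ' .. opts.api_key,
  '-H', 'Content-Type: application/json',
  '-d', body,
}, { stdout = on_stdout }) -- on_stdout(err, chunk): hypothetical stream handler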

Comment on lines +118 to +119
-- __AUTO_GENERATED_PRINT_VAR_START__
print([==[M.call#if#callback output:]==], vim.inspect(output)) -- __AUTO_GENERATED_PRINT_VAR_END__


Severity: medium

It appears a debug print statement was left in the code. This should be removed before merging to avoid polluting the user's output.

Comment on lines +282 to +283
-- __AUTO_GENERATED_PRINT_VAR_START__
print([==[M.get_suggestion#(anon) stdout:]==], vim.inspect(stdout)) -- __AUTO_GENERATED_PRINT_VAR_END__


Severity: medium

This appears to be a leftover debug print statement. It should be removed before this branch is merged.

@Xuyuanp (Owner) commented Jul 3, 2025

Have you tried PR #6?

@meithon (Author) commented Jul 3, 2025

I haven't tried it. Should I give it a try?

@meithon (Author) commented Jul 3, 2025

Apparently, if I use that, I won't need this PR.
