
{tool/claudecode, docs}: add Claude Code toolset#1656

Merged
sandyskies merged 20 commits into main from tool/claudecode-add-claude-code-toolset
Apr 24, 2026

Conversation

@Flash-LHR
Collaborator

Add a standalone tool/claudecode module that exposes a Claude Code-style workflow toolset for file editing, repository search, shell execution, task management, and web retrieval, and document how to integrate and configure it in both the Chinese and English tool guides.
@coderabbitai

coderabbitai Bot commented Apr 20, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds a new Claude-style "claudecode" ToolSet implementing file read/write/edit and notebook editing, bash/task execution (foreground/background), glob/grep, web fetch/search, and PDF handling, with supporting runtime/file/task state, comprehensive tests, and configurable options (base dir, read-only mode, web fetch/search, max file size).


Changes

Docs
Files: docs/mkdocs/en/tool.md, docs/mkdocs/zh/tool.md
New documentation for the tool/claudecode ToolSet: capabilities, default tools, read-only behavior, Go usage snippet, and common options.

Module & constants
Files: tool/claudecode/go.mod, tool/claudecode/constants.go
New module for tool/claudecode with dependencies; centralized constants and lazy lookups for ripgrep/pdftoppm.

Toolset entry & types
Files: tool/claudecode/toolset.go, tool/claudecode/types.go, tool/claudecode/options.go
Public NewToolSet factory, composite toolset wrapper, option types/constructors, and runtime/type definitions for file/task state.

File I/O & state
Files: tool/claudecode/common.go, tool/claudecode/file_state.go, tool/claudecode/read.go, tool/claudecode/write.go, tool/claudecode/edit.go
Path normalization, encoding helpers, snapshotting, and read/write/edit tooling with stale checks, structured patches, quote-style preservation, and PDF/image/notebook read flows.

Notebook support
Files: tool/claudecode/notebook_edit.go
Notebook (.ipynb) edit tool with replace/insert/delete modes, cell-id/index resolution, JSON marshaling, and state integration.

Execution & task control
Files: tool/claudecode/process.go, tool/claudecode/bash.go, tool/claudecode/task_stop.go, tool/claudecode/task_output.go
Process capture utilities; bash tool with foreground timeout and background task management/logging; task stop and task output retrieval tools with polling/timeout.

Search & glob
Files: tool/claudecode/glob.go, tool/claudecode/grep.go
Glob tool using doublestar; grep tool using ripgrep when available, with a robust fallback supporting multiline matching, modes (content/files_with_matches/count), and pagination.

Web tools
Files: tool/claudecode/web_fetch.go, tool/claudecode/web_search.go
Web fetch with domain allow/deny lists, same-host redirect handling, HTML extraction, and a prompt-processor hook; web search with DuckDuckGo scraping and a Google Custom Search backend, plus deduplication and domain filtering.

PDF utilities
Files: tool/claudecode/pdf.go
PDF page-range parsing, page counting, and rendering via pdftoppm, with validation and temp-dir image output handling.

Tests
Files: tool/claudecode/claudecode_test.go
Extensive tests exercising toolset creation, read-only gating, bash (foreground/background/timeout), file read/write/edit semantics, notebook edits, glob/grep, web fetch/search, PDF handling, and many helper branches.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Suggested reviewers

  • WineChord
  • liuzengh
  • sandyskies
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

Docstring Coverage | ⚠️ Warning
Docstring coverage is 2.68%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
Title check | ✅ Passed
The title '{tool/claudecode, docs}: add Claude Code toolset' accurately and concisely describes the main change: introducing a new toolset module (tool/claudecode) and accompanying documentation.

Description check | ✅ Passed
The description is directly related to the changeset, explaining the addition of a standalone tool/claudecode module for code workflows and documentation in both the Chinese and English guides.

Linked Issues check | ✅ Passed
Check skipped because no linked issues were found for this pull request.

Out of Scope Changes check | ✅ Passed
Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



Comment @coderabbitai help to get the list of available commands and usage tips.

@codecov

codecov Bot commented Apr 20, 2026

Codecov Report

❌ Patch coverage is 85.26356% with 383 lines in your changes missing coverage. Please review.
✅ Project coverage is 89.29066%. Comparing base (1811a98) to head (d4cbcb6).
⚠️ Report is 31 commits behind head on main.

Files with missing lines | Patch % | Lines
tool/claudecode/grep.go | 80.29197% | 78 missing and 30 partials ⚠️
tool/claudecode/file_state.go | 79.76654% | 35 missing and 17 partials ⚠️
tool/claudecode/notebook_edit.go | 88.27839% | 22 missing and 10 partials ⚠️
tool/claudecode/common.go | 89.23077% | 19 missing and 9 partials ⚠️
tool/claudecode/web_search.go | 89.78102% | 19 missing and 9 partials ⚠️
tool/claudecode/process.go | 77.11864% | 19 missing and 8 partials ⚠️
tool/claudecode/read.go | 82.97872% | 20 missing and 4 partials ⚠️
tool/claudecode/toolset.go | 75.58140% | 14 missing and 7 partials ⚠️
tool/claudecode/pdf.go | 83.67347% | 11 missing and 5 partials ⚠️
tool/claudecode/glob.go | 78.18182% | 9 missing and 3 partials ⚠️
... and 6 more
Additional details and impacted files
@@                 Coverage Diff                 @@
##                main       #1656         +/-   ##
===================================================
- Coverage   89.68232%   89.29066%   -0.39167%     
===================================================
  Files            784         879         +95     
  Lines         131832      179180      +47348     
===================================================
+ Hits          118230      159991      +41761     
- Misses          8621       13871       +5250     
- Partials        4981        5318        +337     
Flag | Coverage Δ
unittests | 89.29066% <85.26356%> (-0.39167%) ⬇️

Flags with carried forward coverage won't be shown.

@coderabbitai Bot left a comment

Actionable comments posted: 16

🧹 Nitpick comments (7)
tool/claudecode/glob.go (1)

49-49: Consider explicit pattern validation for defense-in-depth clarity.

os.DirFS already enforces io/fs.ValidPath, which rejects paths containing .. elements. However, because in.Pattern comes from external input (model output), adding explicit checks before doublestar.Glob makes the security boundary explicit rather than implicit—this is a best practice for framework code when handling untrusted input.


Optional: Suggested pattern validation

```diff
-			matches, err := doublestar.Glob(os.DirFS(searchDir), in.Pattern, doublestar.WithCaseInsensitive())
+			// Validate pattern: reject absolute paths and parent traversal attempts.
+			// os.DirFS already enforces this via io/fs.ValidPath, but an explicit check clarifies intent.
+			if filepath.IsAbs(in.Pattern) || strings.Contains(filepath.ToSlash(in.Pattern), "..") {
+				return globOutput{}, fmt.Errorf("invalid glob pattern: absolute paths and parent directory traversal are not allowed")
+			}
+			matches, err := doublestar.Glob(os.DirFS(searchDir), in.Pattern, doublestar.WithCaseInsensitive())
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tool/claudecode/glob.go` at line 49, Add explicit validation for the
untrusted in.Pattern before calling doublestar.Glob: ensure the pattern is not
absolute (no leading '/'), does not contain any ".." path element, and each path
segment passes io/fs.ValidPath (use strings.Split on '/' and call fs.ValidPath
for each segment) and reject/back out with a clear error if validation fails;
update the code around the doublestar.Glob call (references: in.Pattern,
searchDir, doublestar.Glob, os.DirFS) so only validated patterns are passed to
doublestar.Glob.
tool/claudecode/bash.go (1)

153-155: Use errors.Is for deadline detection.

err == context.DeadlineExceeded works today because context.WithTimeout stores the sentinel directly, but it breaks silently the moment anything wraps ctx.Err() (e.g., a parent context.Cause, a derived context, or a future change in runCapturedProcess).

♻️ Suggested change

```diff
 func errorsIsDeadlineExceeded(err error) bool {
-	return err == context.DeadlineExceeded
+	return errors.Is(err, context.DeadlineExceeded)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tool/claudecode/bash.go` around lines 153 - 155, Replace the direct equality
check in errorsIsDeadlineExceeded with a proper wrapped-error check: change the
implementation of errorsIsDeadlineExceeded to use errors.Is(err,
context.DeadlineExceeded) so it correctly detects deadline-exceeded even when
ctx.Err() is wrapped; update any callers if necessary that rely on the old
boolean behavior.
tool/claudecode/web_search.go (1)

61-74: Return empty output on backend error, don't surface partial results alongside err.

When target.search fails, the code still builds webSearchOutput{Query, Results, DurationSeconds} and returns (output, err). Callers that only check err will discard output, which is fine — but the duration/query fields are somewhat misleading to anyone who does read through an error, and it's inconsistent with the webFetch path that returns a zero-value output on error. Prefer returning webSearchOutput{}, err to keep error semantics consistent and avoid any chance of partial results leaking.


🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tool/claudecode/web_search.go` around lines 61 - 74, When target.search
returns an error, don't construct and return a partial webSearchOutput; instead
check err immediately after calling target.search and on error return
webSearchOutput{} and err. Move or add the err check right after the call to
target.search in the function that builds the webSearchOutput (the block that
currently constructs Results, Query and DurationSeconds), so you only populate
and return webSearchOutput when err == nil; this keeps behavior consistent with
the webFetch path.
tool/claudecode/constants.go (1)

48-56: Package-level mutable lookup hooks: document that they're test-only seams.

ripgrepLookPath / pdftoppmLookPath are exported-at-package-scope function variables and reset by tests (withRipgrepForTest also re-assigns ripgrepOnce = sync.Once{}). This works, but production code racing with a test-only override is a subtle footgun for anyone adding parallel tests later. Consider (a) moving overrides behind an unexported helper (e.g. setRipgrepLookPathForTest) and/or (b) a short comment on these vars explicitly marking them as test-only.


🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tool/claudecode/constants.go` around lines 48 - 56, The package exposes
mutable function variables ripgrepLookPath and pdftoppmLookPath which tests
reassign and which, along with ripgrepOnce being reset in withRipgrepForTest,
can race with production code; change this by encapsulating test overrides
behind unexported setter helpers (e.g. add setRipgrepLookPathForTest and
setPdftoppmLookPathForTest used only by tests and stop direct reassignment in
tests) or at minimum add clear package-level comments on ripgrepLookPath and
pdftoppmLookPath marking them as test-only seams and warning about concurrent
use; update withRipgrepForTest to use the new setter(s) and avoid resetting
ripgrepOnce from test code directly.
tool/claudecode/web_fetch.go (1)

78-116: Same-host redirect handling looks correct; minor: "too many redirects" loses the last URL.

When the 5-redirect cap is hit, the function returns "" as finalURL. The cross-host short-circuit path already returns the original URL with a helpful message, so this last branch being a bare error is OK, but including the last currentURL in the error (or as finalURL) would make the failure easier to diagnose for tool users.

Also minor: there is no explicit status-code guard that only 3xx responses are treated as redirects — the current StatusMultipleChoices <= code < StatusBadRequest check covers 304 Not Modified, which will attempt a Location-based redirect (empty Location → returns 304 as-is, fine) — worth being aware of but not a bug.


🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tool/claudecode/web_fetch.go` around lines 78 - 116, The loop can hit the
redirect cap and currently returns an empty finalURL; update the final return to
include the last attempted URL (currentURL) and an explicit error so callers can
see where it stopped. In the function handling redirects (the loop using
resolveRedirectURL, searchURLHost and readHTTPBody), change the terminal return
from returning "" to returning currentURL (and the same zeroed status/value
slots) with an error such as fmt.Errorf("too many redirects: last URL %s",
currentURL) so the caller gets the last URL attempted.
tool/claudecode/claudecode_test.go (1)

138-168: Timing-sensitive background-task test: risk of flakiness under load.

sleep 0.3 followed by a Timeout: 2_000 blocking read is fine in isolation, but this pattern is exactly what gets flaky on shared CI runners (GC pauses, disk-backed temp dir, loaded hypervisors). Consider one of:

  • Use a deterministic sync primitive: have the background command write a marker file / pipe and poll for it instead of sleep.
  • Raise the blocking timeout substantially (e.g. 10s) — still cheap on happy path, much more forgiving when the CI is slow.
  • Only assert Contains on a substring that's guaranteed after printf 'a', decoupling from the sleep.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tool/claudecode/claudecode_test.go` around lines 138 - 168,
TestToolSet_TaskOutputReadsBackgroundBashTask is timing-sensitive: replace the
fragile "sleep 0.3" approach with a deterministic sync or widen the blocking
timeout; e.g., change the bashInput Command used to start the background task to
create a ready marker (touch a temp file or write a pipe) after printing 'a' and
have the test poll for that marker before asserting, or simply increase
taskOutputInput.Timeout from 2_000 to a larger value like 10_000 to reduce CI
flakiness; update references in the test to use bgOut.BackgroundTaskID,
taskOutputInput.Block/Timeout and ensure the assertions (RetrievalStatus,
Task.Output, Task.Status) run only after the marker/poll or extended timeout.
tool/claudecode/file_state.go (1)

189-203: preserveQuoteStyle double-applies curly conversion when old text mixes both quote styles.

If actualOldString contains both "…" curly double quotes and '…' curly single quotes, both applyCurlyDoubleQuotes and applyCurlySingleQuotes run on newString. That's usually fine, but it assumes the author of NewString wants curly conversion for every straight quote in the replacement — including ones that are semantically ASCII (e.g. code snippets pasted inside prose). Since there's no way for the caller to opt out, consider restricting curly conversion to spans that correspond to the matched oldString region rather than the entire NewString, or at least document the behavior in the tool description so callers can use find_actual_string == old_string as an escape hatch.


🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tool/claudecode/file_state.go` around lines 189 - 203, preserveQuoteStyle is
currently applying applyCurlyDoubleQuotes and applyCurlySingleQuotes to the
entire newString whenever actualOldString contains curly quotes; instead,
compute the substring of newString that corresponds to the replaced region (the
span that replaces oldString/actualOldString), run applyCurlyDoubleQuotes and/or
applyCurlySingleQuotes only on that slice (based on which curly types appear in
actualOldString), then reassemble newString with the converted slice; if you
cannot reliably map the span, add documentation that callers should pass
actualOldString == oldString to opt out. Use the function name
preserveQuoteStyle and the parameters oldString, actualOldString, newString as
the places to implement this change, and keep
applyCurlyDoubleQuotes/applyCurlySingleQuotes for the localized conversion.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/mkdocs/zh/tool.md`:
- Around line 547-558: The example import block is missing the standard "log"
package causing the call to log.Fatal(err) to not compile; update the import
list to include "log" so the snippet that constructs toolSet via
claudecode.NewToolSet (using claudecode.WithBaseDir(".")) and handles the error
with log.Fatal(err) compiles correctly.

In `@tool/claudecode/bash.go`:
- Around line 86-131: The background tasks started by runBackgroundCommand
(creating backgroundTask entries in runtime.taskState.tasks) never get cleaned
up; update compositeToolSet.Close() to iterate runtime.taskState.tasks, for each
task with Status "running" call the process kill/terminate method (use the
Process stored on backgroundTask), close and remove the associated output file
(store the outputFile handle or ensure you can open and close OutputPath),
set/await task status to terminated, delete the log file from
os.TempDir()/trpc-agent-go-claudecode and clear the tasks map; also modify
runBackgroundCommand to store the outputFile handle (or a closer) on
backgroundTask so Close() can close it and ensure the goroutine that calls
process.Wait() will return promptly when Close() kills the process; optionally
add a TTL or size cap for completed entries in taskState.tasks and remove their
logs after eviction.

In `@tool/claudecode/claudecode_test.go`:
- Around line 991-1004: withRipgrepForTest currently reassigns the package-level
ripgrepOnce (and ripgrepPath / ripgrepLookPath) while only holding
ripgrepTestMu, which races with ripgrepCommand calling ripgrepOnce.Do() without
locking; fix by protecting all accesses to ripgrepOnce, ripgrepPath, and
ripgrepLookPath: either (A) change ripgrepTestMu to a sync.RWMutex and acquire a
read lock in ripgrepCommand (and any other readers) and a write lock in
withRipgrepForTest, or (B) keep ripgrepTestMu and add the same mutex lock/unlock
around reads in ripgrepCommand so every reader uses the mutex; update references
in withRipgrepForTest and ripgrepCommand accordingly to eliminate the data race.

In `@tool/claudecode/common.go`:
- Around line 172-211: decodeTextBytes and isProbablyBinary only recognize
UTF-16LE BOM (0xFF 0xFE) causing UTF-16BE (0xFE 0xFF) and UTF-8 BOM (0xEF 0xBB
0xBF) files to be misclassified as binary; update both functions to detect
UTF-16BE and UTF-8 BOMs: in decodeTextBytes, check for EF BB BF (strip and
decode as utf8), FE FF (use textunicode.UTF16(textunicode.BigEndian,
textunicode.ExpectBOM).NewDecoder() and return "utf16be"), and FF FE as
currently implemented; in isProbablyBinary, treat presence of the UTF-8 BOM and
UTF-16BE/LE BOMs as non-binary (whitelist EF BB BF and FE FF in addition to FF
FE) so files with zero bytes from UTF-16BE aren't flagged as binary. Ensure the
encoding string returned/checked is consistent ("utf8", "utf16le", "utf16be").
- Around line 29-55: normalizePath currently only does lexical checks and can be
bypassed by symlinks; update normalizePath to EvalSymlinks the target path (for
absolute paths) or the deepest existing ancestor (for relative paths) and then
re-run the containment check using filepath.Rel against cleanBase, returning an
error if the resolved path is outside baseDir; use filepath.EvalSymlinks and the
same ".." prefix logic shown in normalizePath so both the abs-path branch
(cleanPath) and the relative branch (cleanPath -> absPath) validate the resolved
path before returning.

In `@tool/claudecode/constants.go`:
- Line 40: The 1 GiB constant maxEditableFileSize is too large for an in-memory
read path and can cause multi‑GiB allocations when used by editLocalFile →
readLocalFileSnapshot → os.ReadFile; lower the default (e.g. a few hundred MiB)
and/or make it configurable on the ToolSet so callers can override it, and add
an early guard in readLocalFileSnapshot (or editLocalFile) that rejects files
exceeding the threshold or non-text/binary files detected before calling
os.ReadFile; update places referencing maxEditableFileSize to use the new
configurable value and add a clear error message when refusing the file.

In `@tool/claudecode/file_state.go`:
- Around line 63-79: ensureWriteAllowed (and similarly storeReadView and
matchesReadView) currently access state.views without internal synchronization
and also fails to update view.Timestamp when snapshot.Content matches; to fix,
move locking into these functions (acquire/release state.mu inside
ensureWriteAllowed, storeReadView, matchesReadView) so callers need not hold the
lock, and while doing so update the cached view by setting view.Timestamp =
snapshot.Timestamp before returning when snapshot.Timestamp > view.Timestamp &&
snapshot.Content == view.Content; ensure all accesses use the locked state.views
map and preserve existing semantics.

In `@tool/claudecode/go.mod`:
- Around line 32-50: The submodule's go.mod pins vulnerable transitive versions
(go.opentelemetry.io/otel/sdk v1.29.0 and google.golang.org/grpc v1.65.0);
update the root module's direct dependency versions (bump otel to >= v1.40.0 and
grpc to >= v1.79.3 or newer), then run "go mod tidy -C tool/claudecode" to
refresh the tool/claudecode/go.mod and go.sum, verify the updated require
entries for go.opentelemetry.io/otel/sdk and google.golang.org/grpc are present
in tool/claudecode/go.mod, and commit the updated go.sum.

In `@tool/claudecode/notebook_edit.go`:
- Around line 145-198: The bug is that a model-controlled CellID can make
parseNotebookCellID produce cellIndex == len(cells) and insertNotebookCell then
computes insertAt = cellIndex+1 causing a slice panic; to fix, clamp insertAt
inside insertNotebookCell (or validate earlier) so it never exceeds
len(state.cells): compute insertAt based on strings.TrimSpace(in.CellID) but
then set insertAt = min(max(0, insertAt), len(state.cells)); update
insertNotebookCell (and keep applyNotebookEdit/parseNotebookCellID logic) to use
the clamped insertAt before slicing state.cells.

In `@tool/claudecode/options.go`:
- Around line 63-65: The presence flags (hasMaxSize, hasWebFetch, hasWebSearch)
are set but never used; update the consumer logic so these flags control
behavior: when !hasMaxSize, pass a sensible default instead of
options.maxFileSize into newToolRuntime so you can distinguish "unset" from
"zero"; in appendWebTools only construct/append webFetchTool and webSearchTool
if hasWebFetch/hasWebSearch are true (i.e., skip calling newWebSearchTool when
WithWebSearchOptions was never used); and change the zero-config WebFetchOptions
default to AllowAll: false (or require explicit opt-in) rather than AllowAll:
true to avoid silently enabling URL fetches. Ensure you reference the flags
(hasMaxSize/hasWebFetch/hasWebSearch), options.maxFileSize, newToolRuntime,
appendWebTools, WithWebSearchOptions, newWebSearchTool, and WebFetchOptions when
making the changes.

In `@tool/claudecode/pdf.go`:
- Around line 104-124: The extractPDFPages helper currently uses
exec.Command(...).CombinedOutput() with no context or timeout; change its
signature to accept a context.Context (e.g., func extractPDFPages(ctx
context.Context, filePath string, pageRange pdfPageRange) (string, int, error)),
create a bounded context for the pdftoppm call (use context.WithTimeout(ctx,
renderTimeout) where renderTimeout is a sensible constant), invoke
exec.CommandContext on pdftoppmPath with the same args and call
cmd.CombinedOutput() so the external process is killed when the context times
out/cancels, and update any callers to forward their ctx into extractPDFPages;
keep pdftoppmBinary usage and error handling otherwise unchanged.

In `@tool/claudecode/read.go`:
- Around line 109-150: readPDF is leaking temp dirs created by extractPDFPages
because it returns readFile.OutputDir pointing to os.MkdirTemp("") with no
lifecycle management; change extractPDFPages/readPDF to create per-session
output directories under runtime.baseDir (or a ToolSet-owned dir) instead of
os.TempDir, record created dirs in runtime.fileState (or taskState) when
returning readFile.OutputDir, and ensure those paths are removed during
ToolSet.Close()/taskState.Close() cleanup; alternatively, for small
renderedCount inline JPEGs as base64 into readFile (like the image branch) and
only produce disk output when caller explicitly requests it.

In `@tool/claudecode/task_stop.go`:
- Around line 42-51: The code marks task.Status = "killed" before validating
process and before process.Kill() succeeds, which can report tasks as killed
even if Kill fails; change the logic in the task stop path so you first check
process != nil, call process.Kill(), handle and return any error (e.g.,
returning taskStopOutput{} and the error) and only after Kill() succeeds set
task.Status = "killed" (and then release runtime.taskState.mu.Unlock()); ensure
references to taskID, task.Status, process.Kill(),
runtime.taskState.mu.Unlock(), and the returned taskStopOutput are used to
locate and update the code accordingly.

In `@tool/claudecode/tool_search.go`:
- Around line 208-225: In splitToolSearchTerms, a token like "+r" falls through
to optional with the leading '+' preserved; fix by detecting and trimming a
leading '+' first (use strings.HasPrefix and strings.TrimPrefix on the variable
part) and then apply the same minimum-length check (len >= 2) to decide required
vs optional; ensure required tokens are appended without the '+' (modify the
logic around the required branch in splitToolSearchTerms so you trim before
length checks and then append the trimmed token to required or optional
accordingly).

In `@tool/claudecode/toolset.go`:
- Around line 70-88: The appendCoreTools path currently always includes
newBashTool (violating WithReadOnly semantics) and never appends
newAskUserQuestionTool; update appendCoreTools so that when readOnly is true you
skip/omit newBashTool (e.g., don't call newBashTool or remove it from coreTools
when readOnly) and ensure newAskUserQuestionTool is constructed and appended to
cc.tools (placed before the readOnly early-return or appended unconditionally as
intended). Reference: function appendCoreTools, builders newBashTool and
newAskUserQuestionTool, and the readOnly parameter/WithReadOnly behavior.

In `@tool/claudecode/web_search.go`:
- Around line 285-339: The code double-applies pagination: Google Custom Search
already uses b.options.Size and b.options.Offset via query params, so remove the
local slicing; replace the final return that calls applySearchWindow(hits,
webSearchOffset(b.options), webSearchSize(b.options)) with returning hits
directly (i.e., return hits, nil). Ensure any references to
webSearchOffset/webSearchSize or applySearchWindow in this function (and any
unreachable code after removal) are not used so the function compiles.

- Around line 78-116: The loop can hit the redirect cap and currently returns an
empty finalURL; update the final return to include the last attempted URL
(currentURL) and an explicit error so callers can see where it stopped. In the
function handling redirects (the loop using resolveRedirectURL, searchURLHost
and readHTTPBody), change the terminal return from returning "" to returning
currentURL (and the same zeroed status/value slots) with an error such as
fmt.Errorf("too many redirects: last URL %s", currentURL) so the caller gets the
last URL attempted.

In `@tool/claudecode/web_search.go`:
- Around line 61-74: When target.search returns an error, don't construct and
return a partial webSearchOutput; instead check err immediately after calling
target.search and on error return webSearchOutput{} and err. Move or add the err
check right after the call to target.search in the function that builds the
webSearchOutput (the block that currently constructs Results, Query and
DurationSeconds), so you only populate and return webSearchOutput when err ==
nil; this keeps behavior consistent with the webFetch path.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 5e937396-d870-4e30-be15-1d72f49f3fb3

📥 Commits

Reviewing files that changed from the base of the PR and between 84a2710 and 6665afc.

⛔ Files ignored due to path filters (1)
  • tool/claudecode/go.sum is excluded by !**/*.sum
📒 Files selected for processing (25)
  • docs/mkdocs/en/tool.md
  • docs/mkdocs/zh/tool.md
  • tool/claudecode/ask_user.go
  • tool/claudecode/bash.go
  • tool/claudecode/claudecode_test.go
  • tool/claudecode/common.go
  • tool/claudecode/constants.go
  • tool/claudecode/edit.go
  • tool/claudecode/file_state.go
  • tool/claudecode/glob.go
  • tool/claudecode/go.mod
  • tool/claudecode/grep.go
  • tool/claudecode/notebook_edit.go
  • tool/claudecode/options.go
  • tool/claudecode/pdf.go
  • tool/claudecode/process.go
  • tool/claudecode/read.go
  • tool/claudecode/task_output.go
  • tool/claudecode/task_stop.go
  • tool/claudecode/tool_search.go
  • tool/claudecode/toolset.go
  • tool/claudecode/types.go
  • tool/claudecode/web_fetch.go
  • tool/claudecode/web_search.go
  • tool/claudecode/write.go

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tool/claudecode/claudecode_test.go`:
- Around line 1351-1370: This test uses t.Setenv("BASH_DEFAULT_TIMEOUT_MS",
"50") which mutates process-wide env and can cause flakiness when other tests
call bashTimeout concurrently; change the test to stop mutating global env by
passing the timeout via the call path under test (e.g. introduce and use a
WithBashDefaultTimeout option or add a timeout parameter/config struct that
bashTimeout reads) and update TestBashAndProcessHelpersCoverTimeoutAndExitState
to inject the 50ms value via that option instead of t.Setenv; similarly avoid
setting process env in tests that assert envGoogleAPIKey/envGoogleEngineID and
instead inject those values via the backend/config constructor so tests no
longer mutate global environment.
- Around line 139-169: The test TestToolSet_TaskOutputReadsBackgroundBashTask is
timing-sensitive; change the blocking timeout passed to the taskOutput tool from
2_000 to a much larger value (e.g. 10_000) to avoid CI flakes when waiting for
bgOut.BackgroundTaskID, and/or replace the bash Command that uses "sleep 0.3"
with a deterministic signal (e.g. write a marker file then exit, and have the
background task print/read until that file exists) so the assertions on
blocking.RetrievalStatus and blocking.Task.Output are not dependent on short
scheduling windows; update references to bgOut.BackgroundTaskID, taskOutputTool,
and the blocking call accordingly.
- Around line 1305-1315: The test
TestRunLocalRipgrepReturnsFalseWhenRipgrepIsUnavailable should not call
t.Parallel(); remove the t.Parallel() invocation so the test runs serially with
other users of the package-level sync.Mutex used by withRipgrepForTest and avoid
widening the race window against ripgrepCommand (alternatively refactor ripgrep
discovery to accept an injected lookup function rather than mutating package
globals, e.g. change NewToolSet to take a ripgrep lookup dependency).
- Around line 98-169: Several tests (e.g., TestToolSet_BashToolRunsCommand,
TestToolSet_TaskStopStopsBackgroundBashTask,
TestToolSet_TaskOutputReadsBackgroundBashTask) assume POSIX tools (bash,
/bin/sleep, printf, etc.) and will fail on Windows or minimal CI; either mark
the file as Unix-only with a build tag (//go:build unix or !windows) or add
runtime checks at the top of each affected test to call t.Skip when runtime.GOOS
== "windows" or when required binaries are missing (use exec.LookPath for
"bash", "sleep", "pdftoppm", "rg"); update the tests named above to use those
guards (or move them into a _unix_test.go) so Windows CI skips instead of
failing.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 6f7bd393-33d7-4225-8a39-a9d78d503299

📥 Commits

Reviewing files that changed from the base of the PR and between 91ca004 and a120560.

📒 Files selected for processing (1)
  • tool/claudecode/claudecode_test.go

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tool/claudecode/claudecode_test.go`:
- Around line 902-980: The httptest handler in
TestGoogleSearchBackendSearchUsesConfiguredOptions and
TestGoogleSearchBackendSearchUsesEnvironmentFallback currently uses
require.Equal/require.Empty inside the handler goroutine (via the server's
http.HandlerFunc and requestCount), which can call t.FailNow from a non-test
goroutine; instead capture the request details (e.g., collect r.URL.Query()
values and increment requestCount into a thread-safe slice/channel or struct)
inside the handler without calling require, return the response, then after
backend.search() / unwindowedBackend.search() completes assert those captured
values on the test goroutine (or replace require.* in the handler with assert.*
and surface errors back to the test goroutine to fail there). Ensure you
reference the handler's captured state when asserting and keep
backend.search/unwindowedBackend.search calls unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 0b1caf2d-adb9-4df0-9fa3-b3516a810436

📥 Commits

Reviewing files that changed from the base of the PR and between a120560 and be2b295.

📒 Files selected for processing (1)
  • tool/claudecode/claudecode_test.go

@Flash-LHR Flash-LHR changed the title {tool/claudecode, docs/mkdocs}: add Claude Code toolset {tool/claudecode, docs}: add Claude Code toolset Apr 20, 2026
@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (2)
tool/claudecode/web_search.go (2)

60-74: err from target.search is returned alongside partially built results.

When target.search returns (hits, err) with err != nil, the handler still builds results from hits (if any) and returns (webSearchOutput{…, Results: results, …}, err). Callers that follow the usual Go convention of "on error, output is zero/garbage" will either discard real hits or, worse, use both — and the framework's function.FunctionTool serialization path will also serialize a non-empty output with a non-nil error.

Since both current backends only ever return hits or error (never both), this is latent rather than actively broken, but it's a contract landmine for future backends / retries. Suggest explicitly picking one shape:

Suggested fix
 			start := time.Now()
 			hits, err := target.search(ctx, in)
+			if err != nil {
+				return webSearchOutput{}, err
+			}
 			results := make([]webSearchResult, 0, 1)
 			if len(hits) > 0 {
 				results = append(results, webSearchResult{
 					ToolUseID: uuid.NewString(),
 					Content:   hits,
 				})
 			}
 			return webSearchOutput{
 				Query:           in.Query,
 				Results:         results,
 				DurationSeconds: max(time.Since(start).Seconds(), 0.001),
-			}, err
+			}, nil

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tool/claudecode/web_search.go` around lines 60 - 74, The handler currently
returns a non-zero webSearchOutput (including webSearchResult built from hits)
alongside a non-nil err from target.search; change the logic in the anonymous
handler so that after calling target.search(ctx, in) you check if err != nil and
immediately return a zero-value webSearchOutput (with DurationSeconds set if
desired) and the err, instead of building/returning results from hits; ensure
the check references target.search, webSearchResult and webSearchOutput so
callers and the function.FunctionTool path never serialize results when an error
is returned.

311-320: Unbounded JSON decode on Google response.

json.NewDecoder(resp.Body).Decode(&decoded) reads the full response body with no cap, unlike the error-branch readHTTPBody(resp, 16*1024, 16*1024) at line 308 and the DDG branch's 512KB cap at line 146. A misbehaving/compromised upstream (or a wrong BaseURL pointing at an arbitrary endpoint, which is user-configurable) can force large allocations on the request goroutine.

Consider bounding the body with io.LimitReader before decoding (Google CSE responses are normally well under a few hundred KB):

Suggested fix
-	if err := json.NewDecoder(resp.Body).Decode(&decoded); err != nil {
+	if err := json.NewDecoder(io.LimitReader(resp.Body, 1<<20)).Decode(&decoded); err != nil {
 		return nil, err
 	}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tool/claudecode/web_search.go` around lines 311 - 320, The Google-CSE success
path currently calls json.NewDecoder(resp.Body).Decode(&decoded) without a size
cap, risking large allocations; change this to decode from an
io.LimitReader(resp.Body, <CAP>) (e.g., 1<<20 for 1MB) before calling
json.NewDecoder so the JSON decode is bounded, and keep the existing error
handling; locate the call to json.NewDecoder(resp.Body).Decode in web_search.go
(the decoded struct / decoded variable) and replace the direct resp.Body use
with a limited reader, ensuring you still close resp.Body and return an
appropriate error if the payload exceeds the limit.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tool/claudecode/web_search.go`:
- Around line 178-182: The current logic in web_search.go sets
results[len(results)-1].Snippet when encountering either htmlHasClass(node,
"result__snippet") or htmlHasClass(node, "result__body"), which causes
htmlNodeText(node) on the container to collapse title+URL+snippet; update the
condition to only match "result__snippet" (remove "result__body" from the OR),
keep using htmlNodeText(node) for the snippet when present, and do not populate
Snippet otherwise so the Snippet field remains empty if a dedicated
result__snippet element is missing (refer to htmlHasClass, htmlNodeText, and the
results[...] .Snippet assignment).
- Around line 142-152: The DuckDuckGo branch currently calls b.client.Do and
immediately parses HTML via parseDuckDuckGoHTML without checking resp.StatusCode
or ensuring the body is always closed; update the code around the HTTP roundtrip
(the resp, err := b.client.Do(req) block) to: 1) defer resp.Body.Close() as soon
as resp is non-nil so the body is always closed even on panic, 2) read the body
with readHTTPBody, and 3) if resp.StatusCode is not in the 2xx range return a
descriptive error that includes the numeric status and a truncated version of
the body (similar to the Google backend behavior) instead of calling
parseDuckDuckGoHTML, otherwise call parseDuckDuckGoHTML(body, in, b.offset,
b.size) and return its result.
- Around line 208-224: The function normalizeDuckDuckGoResultURL currently
double-decodes the uddg parameter: parsed.Query().Get("uddg") is already
percent-decoded, so calling url.QueryUnescape on it corrupts legitimate %XX
sequences; change normalizeDuckDuckGoResultURL to return the trimmed value from
parsed.Query().Get("uddg") directly (after strings.TrimSpace) and remove the
extra url.QueryUnescape logic so you don't perform a second unescape on uddg.

---

Nitpick comments:
In `@tool/claudecode/web_search.go`:
- Around line 60-74: The handler currently returns a non-zero webSearchOutput
(including webSearchResult built from hits) alongside a non-nil err from
target.search; change the logic in the anonymous handler so that after calling
target.search(ctx, in) you check if err != nil and immediately return a
zero-value webSearchOutput (with DurationSeconds set if desired) and the err,
instead of building/returning results from hits; ensure the check references
target.search, webSearchResult and webSearchOutput so callers and the
function.FunctionTool path never serialize results when an error is returned.
- Around line 311-320: The Google-CSE success path currently calls
json.NewDecoder(resp.Body).Decode(&decoded) without a size cap, risking large
allocations; change this to decode from an io.LimitReader(resp.Body, <CAP>)
(e.g., 1<<20 for 1MB) before calling json.NewDecoder so the JSON decode is
bounded, and keep the existing error handling; locate the call to
json.NewDecoder(resp.Body).Decode in web_search.go (the decoded struct / decoded
variable) and replace the direct resp.Body use with a limited reader, ensuring
you still close resp.Body and return an appropriate error if the payload exceeds
the limit.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: ae6cdd5e-3920-4674-a38c-91c0af9dfe6e

📥 Commits

Reviewing files that changed from the base of the PR and between be2b295 and d4a9d66.

📒 Files selected for processing (5)
  • docs/mkdocs/zh/tool.md
  • tool/claudecode/claudecode_test.go
  • tool/claudecode/task_stop.go
  • tool/claudecode/tool_search.go
  • tool/claudecode/web_search.go
✅ Files skipped from review due to trivial changes (3)
  • tool/claudecode/tool_search.go
  • docs/mkdocs/zh/tool.md
  • tool/claudecode/claudecode_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • tool/claudecode/task_stop.go

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tool/claudecode/web_search.go`:
- Around line 289-294: The Google Custom Search mapping currently sets the
"start" parameter directly from b.options.Offset which is zero-based in our
other backends; change the mapping so that when setting query.Set("start", ...)
you use strconv.Itoa(b.options.Offset+1) (i.e., map Offset → Offset+1) so
Offset=0→start=1 and Offset=1→start=2, and also clamp b.options.Size to Google's
max (10) before setting query.Set("num", ...) to avoid requesting more than
allowed; update the logic around query.Set("num", strconv.Itoa(b.options.Size))
and query.Set("start", ...) accordingly (refer to b.options.Size,
b.options.Offset, and the query.Set calls).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 6e83ee41-b9bb-4f3d-a67d-662a970a03d5

📥 Commits

Reviewing files that changed from the base of the PR and between d4a9d66 and ed32bcb.

📒 Files selected for processing (3)
  • tool/claudecode/claudecode_test.go
  • tool/claudecode/notebook_edit.go
  • tool/claudecode/web_search.go
✅ Files skipped from review due to trivial changes (1)
  • tool/claudecode/notebook_edit.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • tool/claudecode/claudecode_test.go

@coderabbitai coderabbitai Bot left a comment

♻️ Duplicate comments (1)
tool/claudecode/web_search.go (1)

289-294: ⚠️ Potential issue | 🟡 Minor

Clamp the Google request window before sending num / start.

Google Custom Search rejects requests where num > 10 or start + num > 100 per the official cse.list docs, so Size=20 or Offset=95 with the default page size turns a valid tool option into a provider error. Clamp the provider-specific request window while preserving the zero-based Offset contract.

Suggested fix
+	const googleMaxPageSize = 10
+	const googleMaxResults = 100
+	offset := max(0, b.options.Offset)
+	if offset >= googleMaxResults {
+		return nil, nil
+	}
+	size := b.options.Size
+	if size <= 0 || size > googleMaxPageSize {
+		size = googleMaxPageSize
+	}
+	if remaining := googleMaxResults - offset; size > remaining {
+		size = remaining
+	}
-	if b.options.Size > 0 {
-		query.Set("num", strconv.Itoa(b.options.Size))
-	}
-	if b.options.Offset > 0 {
-		query.Set("start", strconv.Itoa(b.options.Offset+1))
-	}
+	query.Set("num", strconv.Itoa(size))
+	if offset > 0 {
+		query.Set("start", strconv.Itoa(offset+1))
+	}

Source: Google Custom Search JSON API cse.list reference: https://developers.google.com/custom-search/v1/reference/rest/v1/cse/list
As per coding guidelines, framework code should prioritize semantic and behavioral compatibility plus resource boundaries.


🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tool/claudecode/web_search.go` around lines 289 - 294, Clamp the
provider-specific Google request window before calling query.Set: compute num :=
min(b.options.Size, 10) and start := b.options.Offset + 1 (preserving the
zero-based Offset contract), ensure start >= 1, then if start + num > 100 reduce
num to max(1, 100 - start + 1); finally call query.Set("num", strconv.Itoa(num))
and query.Set("start", strconv.Itoa(start)) in the Google branch (referencing
b.options.Size, b.options.Offset and the query.Set calls in web_search.go).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@tool/claudecode/web_search.go`:
- Around line 289-294: Clamp the provider-specific Google request window before
calling query.Set: compute num := min(b.options.Size, 10) and start :=
b.options.Offset + 1 (preserving the zero-based Offset contract), ensure start
>= 1, then if start + num > 100 reduce num to max(1, 100 - start + 1); finally
call query.Set("num", strconv.Itoa(num)) and query.Set("start",
strconv.Itoa(start)) in the Google branch (referencing b.options.Size,
b.options.Offset and the query.Set calls in web_search.go).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: cfadc8b1-0e26-4a8d-8f58-61030618d415

📥 Commits

Reviewing files that changed from the base of the PR and between ed32bcb and 385abdc.

📒 Files selected for processing (2)
  • tool/claudecode/claudecode_test.go
  • tool/claudecode/web_search.go
✅ Files skipped from review due to trivial changes (1)
  • tool/claudecode/claudecode_test.go

Collaborator

@sandyskies sandyskies left a comment

LGTM

@@ -0,0 +1,379 @@
//
// Tencent is pleased to support the open source community by making
Collaborator

What is NotebookEdit used for?

Collaborator Author

It's an editing tool for Jupyter Notebooks.

Collaborator Author

@Flash-LHR Flash-LHR Apr 23, 2026

(image) Added documentation for it.

@@ -0,0 +1,125 @@
//
// Tencent is pleased to support the open source community by making
Collaborator

Are the task-related tools mainly useful in the Claude Code scenario? Is it worthwhile to expose them as standalone tools?

Collaborator Author

Yes, they are useful:
TaskStop: stops a background task that is still running.
TaskOutput: reads the output of a background task.

Collaborator

@sandyskies sandyskies left a comment

LGTM

@sandyskies sandyskies merged commit ed1042d into main Apr 24, 2026
167 checks passed
@github-actions github-actions Bot locked and limited conversation to collaborators Apr 24, 2026