
fix(website-server): Remove @google-cloud/logging-winston to fix gRPC bundling issues #1074

Merged
yamadashy merged 4 commits into main from fix/website-server-console-logging
Jan 4, 2026

Conversation

@yamadashy (Owner)

Summary

  • Remove @google-cloud/logging-winston dependency that caused gRPC bundling issues with Rolldown
  • Replace with Console transport + Cloud Logging severity mapping (Cloud Run automatically sends stdout to Cloud Logging)
  • Add trace context extraction for distributed tracing correlation
  • Implement code splitting with separate worker entry point for smaller worker bundles
  • Copy web-tree-sitter.wasm to dist-bundled for Compress feature support
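
The Console transport + severity mapping approach can be sketched roughly as follows. This is an illustrative sketch, not the repomix implementation: the real code uses a winston formatter in website/server/src/utils/logger.ts, and the function names here are made up.

```typescript
// Sketch of the Console-transport approach: map log levels to the
// `severity` field Cloud Logging recognizes, then emit one JSON object
// per line to stdout (Cloud Run forwards stdout to Cloud Logging).
const SEVERITY_MAP: Record<string, string> = {
  error: "ERROR",
  warn: "WARNING",
  info: "INFO",
  debug: "DEBUG",
};

export function toSeverity(level: string): string {
  // Unknown levels fall back to Cloud Logging's DEFAULT severity
  return SEVERITY_MAP[level] ?? "DEFAULT";
}

export function logLine(level: string, message: string): string {
  // Each stdout line is parsed by Cloud Logging as one structured entry
  return JSON.stringify({ severity: toSeverity(level), message });
}
```

With this in place, no gRPC-backed transport is needed: Cloud Run's log agent picks up the JSON lines and the `severity` field drives log-level filtering in Cloud Logging.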

Checklist

  • Run npm run test
  • Run npm run lint

… bundling issues

Replace @google-cloud/logging-winston with Console transport and Cloud Logging
severity mapping. Cloud Run automatically sends stdout to Cloud Logging, so
a dedicated transport is unnecessary.

Changes:
- Remove @google-cloud/logging-winston dependency
- Add severity mapping for Cloud Logging compatibility
- Add trace context extraction for distributed tracing
- Implement code splitting for server and worker bundles
- Add worker-entry.ts for minimal worker bundle
- Simplify Dockerfile to only copy tinypool and tiktoken
web-tree-sitter looks for the WASM file in the same directory as the
JavaScript file. Without this, the Compress feature fails with ENOENT.
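
The trace context extraction mentioned above can be sketched as below. The X-Cloud-Trace-Context header has the shape TRACE_ID/SPAN_ID;o=OPTIONS; the function name and the explicit projectId parameter (standing in for GOOGLE_CLOUD_PROJECT) are illustrative, not the exact repomix code.

```typescript
// Sketch of X-Cloud-Trace-Context parsing for log/trace correlation.
export function parseTraceHeader(
  header: string | undefined,
  projectId?: string,
): { trace?: string; spanId?: string } {
  if (!header) return {};
  // "TRACE_ID/SPAN_ID;o=OPTIONS" -> keep only the part before ";"
  const [traceSpan] = header.split(";");
  const [traceId, spanId] = traceSpan.split("/");
  if (!traceId) return {};
  return {
    // Cloud Logging expects the fully-qualified resource name when the
    // project is known: projects/PROJECT_ID/traces/TRACE_ID
    trace: projectId ? `projects/${projectId}/traces/${traceId}` : traceId,
    ...(spanId ? { spanId } : {}),
  };
}
```

The returned fields would then be attached to log entries as `logging.googleapis.com/trace` and `logging.googleapis.com/spanId`, which is how Cloud Logging correlates logs with distributed traces.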
@coderabbitai (Contributor)

coderabbitai bot commented Jan 4, 2026

Important

Review skipped

Auto incremental reviews are disabled on this repository.


Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

The PR restructures the server bundling and worker architecture: it introduces a separate lightweight worker entry point, replaces Tinypool worker logic with a unified worker handler, migrates logging from Cloud Logging transport to Console-based transport (while maintaining Cloud Run trace context), refactors the bundling pipeline to support multiple entry points with code-splitting and WASM collection, and updates deployment configuration to reflect new build artifacts.

Changes

  • Ignore and Configuration Files (website/server/.dockerignore, .gcloudignore, .gitignore): adds dist-bundled and .compile-cache to .dockerignore and .gitignore; extends .gcloudignore with markdown artifacts (README.md, CHANGELOG.md, LICENSE, *.md).
  • Logging Infrastructure (website/server/src/utils/logger.ts, website/server/src/middlewares/cloudLogger.ts): replaces the Google Cloud Logging Winston transport with a Console transport; adds a Cloud Logging severity formatter for production JSON logs; introduces extractTraceContext() to parse the X-Cloud-Trace-Context header and inject trace/spanId into all log entries.
  • Worker and Server Entry Points (website/server/src/worker-entry.ts, website/server/src/index.ts): new lightweight worker entry point re-exports unifiedWorkerHandler and onWorkerTermination from repomix; removes Tinypool worker detection from the main server entry and simplifies the initialization guard to check warmup mode only.
  • Bundling Pipeline (website/server/scripts/bundle.mjs): introduces cleanDistBundled() and getFileSizeMB() helpers; replaces the single Rolldown bundle with a multi-entry bundleAll() supporting code splitting via shared chunk groups; adds WASM file collection (root-level web-tree-sitter.wasm copy plus language WASM files); updates output naming to [name].mjs and [name]-[hash].mjs.
  • Deployment and Runtime Configuration (website/server/Dockerfile, website/server/package.json, website/server/warmup.mjs): updates Dockerfile comments and changes REPOMIX_WORKER_PATH to /app/dist-bundled/worker.mjs; removes the @google-cloud/logging-winston dependency; refactors warmup to loop over all .mjs files in dist-bundled and import each; adds an explicit process.exit(0) to ensure clean termination.
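
The warmup refactor described above can be sketched as follows. The helper names are illustrative (the real script is website/server/warmup.mjs); the directory name dist-bundled matches the PR.

```typescript
// Sketch of the warmup pattern: import every bundled .mjs entry so the
// V8 compile cache is populated at image build time, then exit
// explicitly, since winston transports can keep the event loop alive.
import { readdirSync } from "node:fs";
import { join } from "node:path";

export function listBundles(dir: string, entries: string[]): string[] {
  // Only the .mjs entry bundles are warmed; other artifacts are skipped
  return entries.filter((f) => f.endsWith(".mjs")).map((f) => join(dir, f));
}

export async function warmup(dir = "dist-bundled"): Promise<void> {
  for (const file of listBundles(dir, readdirSync(dir))) {
    await import(file); // compiling the module is the side effect we want
  }
  process.exit(0); // open handles may otherwise keep the process alive
}
```

This only works safely because each entry guards its initialization behind a warmup-mode check, so importing a bundle compiles it without starting the server.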

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers

  • yamadashy

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Title check: ✅ Passed. The title accurately summarizes the primary change: removing @google-cloud/logging-winston to fix gRPC bundling issues. It is specific, concise, and directly reflects the main motivation for the PR.
  • Description check: ✅ Passed. The description is comprehensive and covers all major changes, including dependency removal, logging replacement, trace context extraction, code splitting, and WASM handling. Checklist items are marked complete as required by the template.
  • Docstring coverage: ✅ Passed. Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.


@gemini-code-assist (Contributor)

Summary of Changes

Hello @yamadashy, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses critical bundling issues and enhances the server's operational efficiency and observability. By removing a problematic logging dependency and adopting a more streamlined logging approach compatible with Cloud Run, it resolves gRPC conflicts. Furthermore, it introduces distributed tracing capabilities for better request correlation and implements code splitting to optimize worker bundle sizes, leading to improved startup times and reduced memory footprint. The changes also ensure proper handling of WebAssembly files crucial for certain features.

Highlights

  • Logging Dependency Removal: The @google-cloud/logging-winston dependency has been removed to resolve gRPC bundling conflicts with Rolldown, streamlining the build process.
  • Enhanced Cloud Logging Integration: Logging now utilizes a Console transport with direct Cloud Logging severity mapping, leveraging Cloud Run's automatic ingestion of stdout for a more efficient and native logging experience.
  • Distributed Tracing Correlation: Trace context extraction from X-Cloud-Trace-Context headers has been implemented, allowing for better correlation of logs with distributed traces in Cloud Logging.
  • Optimized Worker Bundling with Code Splitting: The bundling process now employs code splitting to generate a separate, minimal worker.mjs entry point. This significantly reduces worker bundle size, improving startup times and memory usage for tinypool workers.
  • WASM File Handling Improvement: The web-tree-sitter.wasm file is now explicitly copied to the dist-bundled directory, ensuring it is correctly located and loaded for features that rely on WebAssembly.
  • Build Configuration Updates: Relevant ignore files (.dockerignore, .gcloudignore, .gitignore) and the Dockerfile have been updated to reflect the new dist-bundled output structure and worker entry point.
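
The WASM handling highlighted above can be sketched roughly as below. The source path inside node_modules and the helper name are assumptions for illustration; the actual logic lives in website/server/scripts/bundle.mjs.

```typescript
// Sketch of the WASM copy step: web-tree-sitter resolves its .wasm file
// relative to the directory of the JavaScript that loads it, so the file
// must sit next to the bundled output or loading fails with ENOENT.
import { copyFileSync, existsSync } from "node:fs";
import { join } from "node:path";

export function copyWasm(nodeModules: string, outDir: string): boolean {
  // Assumed package layout; the real source path may differ
  const src = join(nodeModules, "web-tree-sitter", "web-tree-sitter.wasm");
  if (!existsSync(src)) {
    console.warn(`web-tree-sitter wasm not found at ${src}`);
    return false; // reviews below suggest failing the build here instead
  }
  copyFileSync(src, join(outDir, "web-tree-sitter.wasm"));
  return true;
}
```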


@codecov

codecov bot commented Jan 4, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 87.19%. Comparing base (4650cb0) to head (2928e2b).
⚠️ Report is 5 commits behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #1074   +/-   ##
=======================================
  Coverage   87.19%   87.19%           
=======================================
  Files         116      116           
  Lines        4350     4350           
  Branches     1011     1011           
=======================================
  Hits         3793     3793           
  Misses        557      557           

☔ View full report in Codecov by Sentry.

@cloudflare-workers-and-pages

cloudflare-workers-and-pages bot commented Jan 4, 2026

Deploying repomix with Cloudflare Pages

Latest commit: 2928e2b
Status: ✅  Deploy successful!
Preview URL: https://b0a5fcbd.repomix.pages.dev
Branch Preview URL: https://fix-website-server-console-l.repomix.pages.dev


@gemini-code-assist (Contributor) left a comment

Code Review

This pull request effectively resolves the gRPC bundling issue by removing the @google-cloud/logging-winston dependency and replacing it with a custom Winston console transport that maps to Cloud Logging severities. The implementation is solid. Additionally, the introduction of code splitting for worker bundles is a great performance optimization, and the inclusion of trace context extraction significantly improves observability. I've left a few minor suggestions to improve code clarity and performance, but overall, this is a high-quality contribution.

Comment on lines +59 to +60
...(traceContext.trace && { 'logging.googleapis.com/trace': traceContext.trace }),
...(traceContext.spanId && { 'logging.googleapis.com/spanId': traceContext.spanId }),

Severity: medium

The logic to add trace context fields is duplicated in three places (request start, completion, and failure logs on lines 82-83 and 105-106). To improve maintainability and reduce code duplication, consider defining these fields once and reusing them.

For example:

const traceContext = extractTraceContext(c);
const traceFields = {
  ...(traceContext.trace && { 'logging.googleapis.com/trace': traceContext.trace }),
  ...(traceContext.spanId && { 'logging.googleapis.com/spanId': traceContext.spanId }),
};

// Then in your log calls:
logger.info({
  // ... other fields
  ...traceFields,
});

@claude (Contributor)

claude bot commented Jan 4, 2026

Code Review

Overall, this is a well-structured PR that addresses a real bundling issue with gRPC/logging dependencies. The approach of using Console transport + Cloud Logging severity mapping is the recommended pattern for Cloud Run environments.

Summary

What this PR does:

  • Removes @google-cloud/logging-winston to fix gRPC bundling issues with Rolldown
  • Replaces with Console transport that maps to Cloud Logging severity levels
  • Adds trace context extraction for distributed tracing
  • Implements code splitting with separate worker entry point
  • Updates build configuration for the new bundled structure

Code Quality & Best Practices

Positive:

  • Clean separation of concerns with the new cloudLogger.ts middleware
  • Good use of Cloud Logging structured fields (logging.googleapis.com/trace, httpRequest)
  • Minimal worker entry point (worker-entry.ts) is well-designed for reducing bundle size
  • Proper code splitting configuration in bundle.mjs
Details

The logger implementation follows Cloud Logging best practices by:

  1. Using the severity field that Cloud Logging recognizes
  2. Including httpRequest structured payloads for request logging
  3. Supporting trace correlation with X-Cloud-Trace-Context header parsing
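
A minimal illustration of such a structured entry, combining the severity, trace, and httpRequest fields the review lists. The field names follow the Cloud Logging LogEntry schema; the wrapper function is hypothetical.

```typescript
// Sketch of one structured request-log entry as Cloud Logging expects it.
export function requestLogEntry(opts: {
  message: string;
  method: string;
  url: string;
  status: number;
  latencyMs: number;
  trace?: string; // e.g. "projects/PROJECT_ID/traces/TRACE_ID"
}): Record<string, unknown> {
  return {
    severity: "INFO",
    message: opts.message,
    // Trace correlation field, only set when a trace header was present
    ...(opts.trace && { "logging.googleapis.com/trace": opts.trace }),
    httpRequest: {
      requestMethod: opts.method,
      requestUrl: opts.url,
      status: opts.status,
      // Cloud Logging expects latency as a duration string in seconds
      latency: `${(opts.latencyMs / 1000).toFixed(3)}s`,
    },
  };
}
```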

Potential Issues

Minor items to consider
  1. cloudLogger.ts:16 - substr is deprecated: Consider using substring instead of substr:

    // Current
    Math.random().toString(36).substr(2, 9)
    // Suggested
    Math.random().toString(36).substring(2, 11)
  2. .dockerignore - duplicate .git entry: the .dockerignore file has .git listed twice (lines 15 and 69). This is harmless but slightly redundant.

  3. Warmup script process.exit(0): The comment explains why this is needed (winston keeping event loop alive), which is good documentation. Just ensure this doesn't mask any unhandled promise rejections during warmup.

Security Considerations

  • No sensitive data exposure in logging
  • Request IDs use sufficient entropy with Date.now() + random string
  • No credentials or secrets in the changed files

Performance Considerations

Positive:

  • Code splitting should reduce worker startup time and memory
  • Removing gRPC dependency significantly reduces bundle size (noted in PR: -1186 lines from package-lock.json)
  • Compile cache generation at build time is a good optimization

Test Coverage

According to Codecov, coverage is maintained at 87.19%. The website server code appears to be outside the main test coverage scope, which is acceptable for infrastructure code.

Premortem Analysis

  • Trace context parsing fails (risk: Low): graceful fallback; returns an empty object if the header is missing or malformed
  • Log severity mapping missing (risk: Low): falls back to 'DEFAULT' severity
  • WASM file not found during build (risk: Medium): a warning is logged, but this could fail at runtime; consider making it a build error
  • Worker entry point not found (risk: High): the Dockerfile uses the hardcoded path /app/dist-bundled/worker.mjs; ensure this matches the bundle output

Recommendation

Approve - This is a well-executed fix for the bundling issue. The code is clean, follows Cloud Run best practices, and the architectural decisions (code splitting, Console transport) are sound.


🤖 Generated with Claude Code

@claude (Contributor)

claude bot commented Jan 4, 2026

Code Review - Additional Analysis

I've reviewed this PR after Claude's initial review. Here are some additional observations:

Overall Assessment

This is a well-executed PR that addresses the gRPC bundling issues by removing @google-cloud/logging-winston and implementing a Cloud Run-compatible logging solution. The approach aligns with Google's recommended pattern for Cloud Run services.

Key Strengths

  1. Clean architecture: The new cloudLoggerMiddleware.ts properly separates request logging concerns with trace context extraction
  2. Code splitting: The separate worker-entry.ts is minimal and focused, which should significantly reduce worker bundle size
  3. Cloud Logging compatibility: The severity mapping and structured logging fields follow Google's best practices
  4. WASM file handling: Properly copies web-tree-sitter.wasm to ensure Compress feature works

Minor Issues Identified

1. Deprecated method usage in cloudLogger.ts:16
// Current (deprecated)
Math.random().toString(36).substr(2, 9)

// Suggested (modern)
Math.random().toString(36).substring(2, 11)

String.prototype.substr() is deprecated. While it still works, substring() is the recommended alternative.

2. Duplicate `.git` entry in .dockerignore

Lines 15 and 69 both list .git. This is harmless but could be cleaned up:

  • Line 15: .git (under "Development files")
  • Line 69: .git (under "Misc")

Security Review

  • No security concerns identified
  • Request IDs use sufficient entropy (Date.now() + random string)
  • No credentials or secrets exposed in logging

Premortem Analysis

  • Malformed trace header (risk: Low): gracefully returns an empty object
  • Missing severity level (risk: Low): falls back to 'DEFAULT'
  • web-tree-sitter.wasm missing at build (risk: Medium): warning logged; could fail at runtime for the Compress feature
  • Worker file not found (risk: High): the Dockerfile path must match the bundle output exactly

Recommendation

Approve ✅ - The PR is well-implemented and addresses the stated goals effectively. The minor issues noted above are non-blocking.


🤖 Generated with Claude Code

@coderabbitai (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (5)
website/server/src/utils/logger.ts (2)

4-15: Move severityMap outside the formatter function.

The severityMap is currently recreated on every log entry, which is inefficient. Define it as a module-level constant to avoid repeated allocations.

🔎 Proposed refactor
+// Map winston levels to Cloud Logging severity levels
+// https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry#LogSeverity
+const SEVERITY_MAP: Record<string, string> = {
+  error: 'ERROR',
+  warn: 'WARNING',
+  info: 'INFO',
+  debug: 'DEBUG',
+} as const;
+
-// Map winston levels to Cloud Logging severity levels
-// https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry#LogSeverity
 const cloudLoggingSeverity = winston.format((info) => {
-  const severityMap: Record<string, string> = {
-    error: 'ERROR',
-    warn: 'WARNING',
-    info: 'INFO',
-    debug: 'DEBUG',
-  };
-  info.severity = severityMap[info.level] || 'DEFAULT';
+  info.severity = SEVERITY_MAP[info.level] || 'DEFAULT';
   return info;
 });

21-29: Consider applying severity mapping to both environments.

The cloudLoggingSeverity formatter is currently only applied in production. For consistency and to make local development logs more representative of production, consider applying it to both environments.

🔎 Proposed refactor
   const transports: winston.transport[] = [
     new winston.transports.Console({
       format: isProduction
-        ? winston.format.combine(cloudLoggingSeverity(), winston.format.json())
-        : winston.format.combine(winston.format.timestamp(), winston.format.json()),
+        ? winston.format.combine(cloudLoggingSeverity(), winston.format.json())
+        : winston.format.combine(winston.format.timestamp(), cloudLoggingSeverity(), winston.format.json()),
     }),
   ];
website/server/src/middlewares/cloudLogger.ts (3)

14-17: Replace deprecated substr() with slice().

Line 16 uses the deprecated substr() method. Modern JavaScript should use substring() or slice() instead.

🔎 Proposed fix
 function generateRequestId(): string {
-  return `req-${Date.now()}-${Math.random().toString(36).substr(2, 9)}`;
+  return `req-${Date.now()}-${Math.random().toString(36).slice(2, 11)}`;
 }

19-35: Add validation for empty trace components.

After splitting the trace header on line 27, traceId or spanId could be empty strings if the header is malformed (e.g., "//;o=1" or "/span123;o=1"). Empty strings are truthy and would pass the check on line 29, potentially logging invalid trace data to Cloud Logging.

🔎 Proposed fix with empty string validation
 function extractTraceContext(c: Context): { trace?: string; spanId?: string } {
   const traceHeader = c.req.header('x-cloud-trace-context');
   if (!traceHeader) return {};
 
   const projectId = process.env.GOOGLE_CLOUD_PROJECT || process.env.GCP_PROJECT;
   const [traceSpan] = traceHeader.split(';');
   const [traceId, spanId] = traceSpan.split('/');
 
-  if (!traceId) return {};
+  if (!traceId || traceId.trim() === '') return {};
 
   return {
     trace: projectId ? `projects/${projectId}/traces/${traceId}` : traceId,
-    spanId,
+    ...(spanId && spanId.trim() !== '' && { spanId }),
   };
 }

59-60: Consider extracting trace context spreading into a helper.

The trace context spreading logic is duplicated three times (request start, completion, and error logs). Extracting it into a helper function would reduce repetition and improve maintainability.

🔎 Proposed refactor

Add a helper function after extractTraceContext:

// Helper to build trace context fields for Cloud Logging
function buildTraceFields(traceContext: { trace?: string; spanId?: string }) {
  return {
    ...(traceContext.trace && { 'logging.googleapis.com/trace': traceContext.trace }),
    ...(traceContext.spanId && { 'logging.googleapis.com/spanId': traceContext.spanId }),
  };
}

Then replace the three occurrences with:

     logger.info({
       message: `${method} ${url.pathname} started`,
       requestId,
-      // Cloud Logging trace correlation field
-      ...(traceContext.trace && { 'logging.googleapis.com/trace': traceContext.trace }),
-      ...(traceContext.spanId && { 'logging.googleapis.com/spanId': traceContext.spanId }),
+      ...buildTraceFields(traceContext),
       httpRequest: {

Apply similar changes to lines 82-83 and 105-106.

Also applies to: 82-83, 105-106

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4650cb0 and c839071.

⛔ Files ignored due to path filters (1)
  • website/server/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (11)
  • website/server/.dockerignore
  • website/server/.gcloudignore
  • website/server/.gitignore
  • website/server/Dockerfile
  • website/server/package.json
  • website/server/scripts/bundle.mjs
  • website/server/src/index.ts
  • website/server/src/middlewares/cloudLogger.ts
  • website/server/src/utils/logger.ts
  • website/server/src/worker-entry.ts
  • website/server/warmup.mjs
💤 Files with no reviewable changes (1)
  • website/server/package.json
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-01-27T08:22:19.918Z
Learnt from: boralg
Repo: yamadashy/repomix PR: 318
File: src/config/defaultIgnore.ts:136-138
Timestamp: 2025-01-27T08:22:19.918Z
Learning: In Rust projects, `cargo-timing*.html` files are generated within the `target/` directory, so they're automatically covered by the `**/target/**` ignore pattern.

Applied to files:

  • website/server/.dockerignore
🧬 Code graph analysis (2)
website/server/src/utils/logger.ts (1)
src/shared/logger.ts (1)
  • info (52-56)
website/server/src/middlewares/cloudLogger.ts (1)
website/server/src/utils/logger.ts (1)
  • logger (38-38)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Cloudflare Pages
🔇 Additional comments (16)
website/server/src/middlewares/cloudLogger.ts (1)

51-52: LGTM!

Extracting the trace context once and reusing it across multiple log statements is efficient and follows good practices for Cloud Run distributed tracing.

website/server/.gitignore (1)

4-4: LGTM!

The .compile-cache entry correctly ignores the V8 compile cache directory generated by the warmup script, aligning with the build optimization strategy.

website/server/.gcloudignore (1)

1-71: LGTM!

The comprehensive ignore patterns are well-organized and properly exclude non-runtime artifacts (documentation, build outputs, development files) from cloud deployments, reducing artifact size and improving deployment efficiency.

website/server/warmup.mjs (2)

33-35: LGTM!

The explicit process.exit(0) ensures clean termination, preventing event loop handlers from keeping the process alive after warmup completes.


19-26: The warmup pattern is appropriate for this codebase. The dist-bundled/ directory contains only intentional entry point bundles generated by Rolldown (server.mjs and worker.mjs, plus optional shared chunks), not arbitrary modules. Since these entry points have the WARMUP_MODE guard in place (visible in index.ts lines 14-20), importing all .mjs files is safe—the server initialization and any logging only occur when WARMUP_MODE is not set.

website/server/src/worker-entry.ts (1)

1-14: LGTM!

The minimal worker entry point cleanly separates worker concerns from server dependencies, reducing bundle size and improving worker initialization performance. The re-export pattern is straightforward and well-documented.

website/server/src/index.ts (1)

19-20: LGTM! Simplified initialization guard aligns with worker separation.

The removal of isTinypoolWorker() check is appropriate given the new dedicated worker entry point (worker-entry.ts). The server initialization now only guards against warmup mode, as workers will use their own entry point.

To ensure the Tinypool configuration correctly references the new worker entry point, run:

#!/bin/bash
# Verify Tinypool is configured to use the new worker entry point
echo "=== Checking REPOMIX_WORKER_PATH references ==="
rg -n "REPOMIX_WORKER_PATH" -A 2 -B 2

echo ""
echo "=== Checking Tinypool configuration ==="
rg -n "Tinypool|tinypool" -A 5 -B 2 --type=ts --type=js
website/server/scripts/bundle.mjs (6)

23-32: LGTM!

The cleanup logic is correct and appropriate for a build script. Using rmSync with recursive: true safely handles non-empty directories, and mkdirSync with recursive: true ensures the directory is created.


66-74: Verify the code-splitting output structure.

The advancedChunks configuration is designed to create a shared chunk for code used by both the server and worker entries. Given that Rolldown is in beta (1.0.0-beta.58), please verify that:

  1. The worker bundle is significantly smaller than the server bundle
  2. A shared chunk is generated containing common dependencies
  3. Both bundles execute correctly with the shared chunk loaded

This can be manually verified by inspecting the bundle sizes reported during the build and testing both entry points.


85-91: LGTM!

The file size calculation is correct, and using statSync is appropriate for a build script context. The function provides clear human-readable output for logging.


104-112: LGTM!

The logic to copy web-tree-sitter.wasm to the dist-bundled root is correct and necessary, as explained in the comment. The existence check and warning provide appropriate feedback if the file is missing.


133-147: LGTM!

The build process flow is well-structured: cleaning first prevents stale artifacts, and the sequential execution order is appropriate. The top-level error handling ensures failures are caught and reported.


48-62: No verification needed—the bundled output already executes successfully in Node.js.

The Dockerfile confirms the bundled server.mjs is deployed to production and executed directly via node dist-bundled/server.mjs. A warmup script (RUN node warmup.mjs) pre-compiles the bundled code at build time, proving Node.js ESM compatibility is intact. The banner removal does not break Node.js execution.

website/server/.dockerignore (1)

7-12: LGTM!

The additions of dist-bundled and .compile-cache are appropriate and align with the new build artifacts generated by the bundle script and the compile cache directory used in the Dockerfile.

website/server/Dockerfile (2)

33-37: LGTM!

The updated comments clearly explain why tinypool and tiktoken cannot be bundled, which improves maintainability. The inline explanations about file paths and WASM loading are helpful.


46-46: Path change is correct and properly implemented.

The bundle script generates worker.mjs from the worker-entry.ts entry point, which contains only minimal worker code (exporting the unified worker handler from repomix without server dependencies). The tinypool configuration correctly references REPOMIX_WORKER_PATH at runtime, and the Dockerfile properly sets it to /app/dist-bundled/worker.mjs.

@yamadashy yamadashy merged commit db36537 into main Jan 4, 2026
57 checks passed
@yamadashy yamadashy deleted the fix/website-server-console-logging branch January 4, 2026 12:01