
perf(core): Replace ZIP archive with streaming tar.gz extraction #1153

Merged
yamadashy merged 5 commits into main from refactor/replace-zip-with-tar-gz-streaming
Feb 23, 2026

Conversation

yamadashy (Owner) commented Feb 14, 2026

The previous ZIP-based archive download used fflate's in-memory extraction, which failed on large repositories (e.g. facebook/react) due to memory constraints and ZIP64 limitations.

This PR switches to tar.gz format with Node.js built-in zlib + tar package, enabling a full streaming pipeline:

HTTP response → gunzip → tar extract → disk

Key changes

  • Replace fflate with tar package — removes fflate from main package.json (website retains its own dependency)
  • Change archive format from .zip to .tar.gz — GitHub supports both formats
  • Full streaming extraction — no temporary archive files, constant memory usage regardless of repo size
  • Simplified code — tar.extract({ strip: 1 }) handles prefix removal and path traversal protection built in
  • Remove unused getArchiveFilename — streaming extraction doesn't use temp files, so filename generation is unnecessary
  • Remove Bun worker_threads workaround — the hang was caused by fileCollect workers which have been migrated to a promise pool in perf(core): Optimize file collection with UTF-8 fast path and promise pool #1155, making this workaround unnecessary

Before vs After

Before (ZIP + fflate) → After (tar.gz + zlib)
  • Temp files: Required → None
  • Memory: Entire archive in memory → Streaming (constant)
  • Large repos: Fails (fflate error) → Works
  • I/O passes: 2 (download + read temp) → 1 (direct streaming)
  • Code lines: ~340 → ~190

Checklist

  • Run npm run test
  • Run npm run lint

coderabbitai bot (Contributor) commented Feb 14, 2026

📝 Walkthrough

This PR migrates GitHub archive extraction from ZIP-based in-memory/temporary file handling to streaming tar.gz format with gunzip decompression. Archive URLs are updated from the .zip to the .tar.gz pattern, and the extraction pipeline is refactored to use streaming decompression and tar extraction to disk.

Changes

  • Dependency updates — package.json: Replaced fflate with the tar dependency to support tar.gz extraction instead of ZIP handling.
  • Archive extraction implementation — src/core/git/gitHubArchive.ts: Rewrote the extraction pipeline from the ZIP-based approach to streaming tar.gz: HTTP response → gunzip decompression → tar extraction. Introduced an ArchiveDownloadDeps dependency-injection interface and a defaultDeps object. Removed legacy ZIP extraction helpers and temporary file handling. Updated downloadGitHubArchive and downloadAndExtractArchive signatures to accept a deps parameter.
  • Archive URL generation — src/core/git/gitHubArchiveApi.ts: Updated all GitHub archive URL constructors (HEAD, commit SHAs, branches, tags) to return .tar.gz URLs instead of .zip. Updated the getArchiveFilename fallback extension and documentation comments.
  • Archive extraction tests — tests/core/git/gitHubArchive.test.ts: Refactored test mocks from ZIP-specific behavior to the gzip tarball streaming pipeline. Replaced ZIP mock data and fs operations with streaming-based mocks (tarExtract, createGunzip). Updated URL expectations and error-handling assertions to reflect the tar.gz extraction flow.
  • Archive URL tests — tests/core/git/gitHubArchiveApi.test.ts: Updated test assertions from .zip to .tar.gz file extensions across all archive URL builder tests. No logic or control-flow changes.

Sequence Diagram

sequenceDiagram
    participant Client as Client
    participant HTTP as HTTP Fetch
    participant Gunzip as Gunzip Stream
    participant TarExtract as Tar Extractor
    participant FileSystem as File System

    Client->>HTTP: Request tar.gz archive URL
    HTTP-->>Gunzip: Streaming compressed data
    Gunzip->>Gunzip: Decompress gzip
    Gunzip-->>TarExtract: Streaming decompressed tar data
    TarExtract->>TarExtract: Parse tar entries
    TarExtract->>FileSystem: Extract files to target directory
    FileSystem-->>TarExtract: Write complete
    TarExtract-->>Client: Extraction complete

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

  • #1149: Modifies src/core/git/gitHubArchive.ts for archive extraction and retry/error-handling behavior in the same file.
  • #806: Modifies src/core/git/gitHubArchiveApi.ts archive URL and filename logic in the same functions being changed here.
🚥 Pre-merge checks — 4 passed
  • Docstring Coverage ✅ — No functions found in the changed files to evaluate docstring coverage; skipping the check.
  • Merge Conflict Detection ✅ — No merge conflicts detected when merging into main.
  • Title check ✅ — The title accurately describes the main architectural change: replacing ZIP archive extraction with streaming tar.gz extraction, the central focus across all modified files.
  • Description check ✅ — The pull request description is comprehensive and well-structured: it includes rationale, key changes, a before/after comparison, and completed checklist items.


gemini-code-assist bot (Contributor)
Summary of Changes

Hello @yamadashy, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly refactors the GitHub repository archive download and extraction mechanism. The previous ZIP-based approach, which relied on fflate and in-memory extraction, was prone to failures with large repositories due to memory constraints and ZIP64 limitations. The new implementation transitions to a tar.gz format, utilizing Node.js's built-in zlib and the tar package to enable a robust, full streaming pipeline. This change improves reliability, reduces memory footprint, and streamlines the codebase for archive handling.

Highlights

  • Dependency Change: Replaced the fflate package with the tar package for archive handling.
  • Archive Format: Switched the GitHub archive format from .zip to .tar.gz.
  • Streaming Extraction: Implemented a full streaming pipeline for archive extraction, eliminating temporary files and ensuring constant memory usage.
  • Code Simplification: Simplified the archive extraction code by leveraging tar.extract({ strip: 1 }) for prefix removal and path traversal protection.


Changelog
  • package-lock.json
    • Removed fflate dependency and its related entries.
    • Added tar dependency and its sub-dependencies (@isaacs/fs-minipass, chownr, minipass, minizlib, yallist).
  • package.json
    • Removed fflate from project dependencies.
    • Added tar to project dependencies.
  • src/core/git/gitHubArchive.ts
    • Removed imports for node:fs, node:fs/promises, node:path, and fflate.
    • Added imports for node:zlib and tar.
    • Refactored downloadGitHubArchive to remove fs dependencies and adapt to streaming extraction.
    • Rewrote downloadAndExtractArchive to use a streaming pipeline (HTTP response -> progress -> gunzip -> tar extract) directly to disk.
    • Removed downloadFile, extractZipArchive, extractZipArchiveInMemory, and processExtractedFiles functions, which were part of the old ZIP extraction logic.
  • src/core/git/gitHubArchiveApi.ts
    • Updated buildGitHubArchiveUrl, buildGitHubMasterArchiveUrl, and buildGitHubTagArchiveUrl to generate .tar.gz archive URLs.
    • Modified getArchiveFilename to return filenames with the .tar.gz extension.
  • tests/core/git/gitHubArchive.test.ts
    • Updated imports to remove fs related types and add zlib and tar types.
    • Modified MockDeps interface to reflect the new tarExtract and createGunzip dependencies, removing fs and createWriteStream.
    • Updated beforeEach to mock tarExtract and createGunzip instead of fs and fflate.unzip.
    • Adjusted test cases to assert calls to mockTarExtract and mockPipeline instead of mockFs and mockUnzip.
    • Updated expected archive URLs in tests to use .tar.gz.
    • Removed tests specific to ZIP extraction (path traversal, absolute paths, temporary ZIP file cleanup).
  • tests/core/git/gitHubArchiveApi.test.ts
    • Updated all assertions for archive URLs and filenames to expect .tar.gz instead of .zip.

codecov bot commented Feb 14, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 87.14%. Comparing base (756ce73) to head (2e98b5c).
⚠️ Report is 6 commits behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #1153   +/-   ##
=======================================
  Coverage   87.13%   87.14%           
=======================================
  Files         115      115           
  Lines        4377     4310   -67     
  Branches     1016      998   -18     
=======================================
- Hits         3814     3756   -58     
+ Misses        563      554    -9     


cloudflare-workers-and-pages bot commented Feb 14, 2026

Deploying repomix with Cloudflare Pages

Latest commit: 2e98b5c
Status: ✅  Deploy successful!
Preview URL: https://51477703.repomix.pages.dev
Branch Preview URL: https://refactor-replace-zip-with-ta.repomix.pages.dev


gemini-code-assist bot (Contributor) left a comment

Code Review

This is an excellent refactoring that replaces the in-memory ZIP extraction with a full streaming pipeline for tar.gz archives. This change effectively resolves the memory consumption issues and limitations with large repositories. The switch to the tar package not only enables efficient streaming but also simplifies the codebase significantly by leveraging its built-in features for directory stripping and path traversal protection. The code is cleaner, more robust, and the accompanying test updates are thorough and well-adapted to the new implementation. Great work!

devin-ai-integration[bot]

This comment was marked as resolved.

claude bot (Contributor) commented Feb 14, 2026

Code Review — Replace ZIP archive with streaming tar.gz extraction

This is a well-motivated refactoring that replaces the in-memory ZIP extraction (via fflate) with a streaming tar.gz pipeline using Node.js built-in zlib and the tar package. The change addresses real-world failures on large repositories and significantly simplifies the code (~340 → ~190 lines).

Overall Assessment

Approve — The changes are clean, well-structured, and the streaming approach is the right architectural choice. A few minor observations below.

Observations

1. Dead code: getArchiveFilename is no longer used in source code

getArchiveFilename was removed from the import in gitHubArchive.ts since the streaming approach does not need temp files. However, the function still exists in gitHubArchiveApi.ts:59 with a comment "Note: This is used as a fallback identifier; streaming extraction does not require temp files."

It is only referenced by tests now. Consider removing it entirely or documenting why it should be kept (e.g., for MCP server use or other callers). Currently it is dead code with a misleading "fallback identifier" justification.

2. Target directory creation responsibility shifted to caller

The old code had await deps.fs.mkdir(targetDirectory, { recursive: true }) as a defensive measure. This was removed, and the caller (remoteAction.ts:32) creates the directory via fs.mkdtemp. This works correctly for the current call site, but the function contract is now less defensive — if a future caller forgets to create the directory, tar.extract({ cwd: targetDirectory }) will fail.

This is acceptable since the function is internal, but worth noting in case the API becomes public in the future.

3. Timeout applies only to fetch, not to the full pipeline

The AbortController timeout (setTimeout(() => controller.abort(), timeout)) is set before fetch and cleared in the finally block. However, the timeout only triggers an abort on the fetch signal — it does not abort the streaming pipeline if extraction takes too long after the HTTP response headers arrive. For very large repos with slow extraction, the timeout may not behave as users expect.

The old code had the same limitation (timeout only covered the download phase), so this is not a regression. Just something to be aware of for future improvements.

4. Security: path traversal protection delegated to tar package

The old code had explicit path traversal protection (sanitizing ../, checking path.isAbsolute, verifying paths stay within targetDirectory). The new code relies on the tar package built-in protections.

The tar package (v7.x by @isaacs) does handle path traversal prevention by default, so this is a valid approach and the corresponding tests were correctly removed. Good call citing this in the PR description.

5. Test quality improvements

The test refactoring is clean:

  • createMockResponse helper reduces duplication nicely
  • The timeout test now properly respects AbortSignal, which is a genuine improvement over the old test that relied on timing
  • Mock setup is simpler and more focused on the actual behavior being tested

One minor note: the "should handle extraction error" test at line 176-183 tests that a generic Error from the pipeline gets wrapped in a RepomixError, which is good. But the test at line 185-195 ("should not retry on extraction error") mocks the pipeline to throw a RepomixError directly — in practice, the pipeline would throw a regular Error that gets caught and re-wrapped as RepomixError in the downloadAndExtractArchive function try/catch. The test works because the outer retry logic checks for RepomixError with "Failed to extract" in the message, but the mock bypasses the actual wrapping logic. This is fine for a unit test but worth understanding.

Summary

  • Code quality — Clean simplification; good use of Node.js streaming primitives
  • Security — Path traversal handled by the tar package; appropriate delegation
  • Performance — Streaming pipeline eliminates memory constraints; major improvement
  • Test coverage — Adequate; removed tests are correctly obsolete
  • Dependencies — fflate removed, tar added; net improvement for this use case

No blocking issues. Nice refactoring.


Reviewed by Claude

@yamadashy yamadashy changed the title refactor(core): Replace ZIP archive with streaming tar.gz extraction perf(core): Replace ZIP archive with streaming tar.gz extraction Feb 14, 2026
@yamadashy yamadashy force-pushed the refactor/replace-zip-with-tar-gz-streaming branch from d34764b to 452e087 Compare February 16, 2026 13:30
claude bot (Contributor) commented Feb 16, 2026

Code Review — Replace ZIP archive with streaming tar.gz extraction (updated)

This is a well-motivated refactoring that replaces the in-memory ZIP extraction (via fflate) with a streaming tar.gz pipeline using Node.js built-in zlib and the tar package. The change addresses real-world failures on large repositories and significantly simplifies the code (~340 → ~190 lines).

Overall Assessment

Approve — Clean, well-structured changes. The streaming approach is the right choice. A few observations below (none blocking).

Observations

1. extractStream not destroyed in finally block

In src/core/git/gitHubArchive.ts:183-188, streams are explicitly destroyed for Bun compatibility:

nodeStream.destroy();
progressStream.destroy();
gunzipStream.destroy();

However, extractStream (the tar extract writable stream) is not destroyed. If the concern is that Bun's pipeline() may not fully clean up streams, this stream should also be destroyed for consistency. It holds file handles to the target directory.

2. Dead code: getArchiveFilename is no longer used in source

getArchiveFilename was removed from the import in gitHubArchive.ts since the streaming approach does not need temp files. However, the function still exists in gitHubArchiveApi.ts:59 and is only referenced by tests. Consider removing it entirely or documenting a concrete use case. The current "fallback identifier" comment doesn't justify keeping it.

3. Target directory creation responsibility shifted

The old code had await deps.fs.mkdir(targetDirectory, { recursive: true }) as a defensive measure. This was removed. The caller (remoteAction.ts:32) creates the directory via fs.mkdtemp. This is fine since the function is internal, but tar.extract({ cwd: ... }) will fail with an unhelpful error if the directory doesn't exist. Worth considering a guard or documenting the precondition.

4. Bun workaround in processConcurrency.ts is tangential but reasonable

The Bun worker_threads → child_process fallback in src/shared/processConcurrency.ts:70-73 is a reasonable workaround, and the comment explains the rationale. It's only loosely related to the tar.gz change (both address Bun compatibility), which makes the PR scope slightly broader than the title implies, but it's small enough to be acceptable here.

5. "should not retry on extraction error" test bypasses wrapping logic

In tests/core/git/gitHubArchive.test.ts:187, the test mocks pipeline to throw a RepomixError directly. In the actual code, pipeline throws a regular Error, which is then caught and re-wrapped as RepomixError in the try/catch at line 181-182 of gitHubArchive.ts. The test works because the outer retry logic checks for RepomixError with "Failed to extract" in the message, but it bypasses the wrapping logic. This is minor — just worth understanding.

Summary

  • Code quality — Clean simplification; good use of Node.js streaming primitives
  • Security — Path traversal handled by the tar package (v7.x); appropriate delegation
  • Performance — Streaming pipeline eliminates memory constraints; major improvement
  • Test coverage — Adequate; Codecov shows all modified lines covered
  • Dependencies — fflate removed, tar added; net improvement for this use case

No blocking issues.


Reviewed by Claude

…tion

The previous ZIP-based archive download used fflate's in-memory extraction,
which failed on large repositories (e.g. facebook/react) due to memory
constraints and ZIP64 limitations.

Switch to tar.gz format with Node.js built-in zlib + tar package, enabling
a full streaming pipeline (HTTP response -> gunzip -> tar extract -> disk)
with no temporary files and constant memory usage regardless of repo size.

Key changes:
- Replace fflate with tar package for archive extraction
- Change archive URLs from .zip to .tar.gz
- Use streaming pipeline instead of download-then-extract
- Leverage tar's built-in strip and path traversal protection
- Explicitly destroy streams after pipeline for Bun compatibility
- Use child_process runtime under Bun to avoid worker_threads hang
@yamadashy yamadashy force-pushed the refactor/replace-zip-with-tar-gz-streaming branch from 452e087 to ef194b8 Compare February 17, 2026 15:22
devin-ai-integration[bot]

This comment was marked as resolved.

The workaround that forced child_process runtime under Bun was added
to prevent hangs caused by fileCollect worker threads. Since fileCollect
has been migrated to a promise pool on the main thread, the hang no
longer occurs and this workaround is unnecessary.
The streaming tar.gz extraction no longer uses temporary files,
making this filename generation function unnecessary.
claude bot (Contributor) commented Feb 18, 2026

Code Review — Replace ZIP archive with streaming tar.gz extraction (incremental)

Note: Two prior Claude reviews exist on this PR. This review provides only incremental feedback on the current state of the code.

Overall Assessment

Approve — This is a well-executed refactoring. The streaming tar.gz approach is the right architectural decision, and the implementation is clean.

Corrections to Prior Reviews

The two earlier Claude reviews mentioned getArchiveFilename as dead code still present in gitHubArchiveApi.ts. Looking at the current state of the PR, this function has been fully removed from both source and tests. That finding is no longer applicable.

The earlier review also mentioned a Bun workaround in processConcurrency.ts — this is not part of this PR's diff.

One Remaining Actionable Item

extractStream not destroyed in finally block

At src/core/git/gitHubArchive.ts:183-188, three streams are explicitly destroyed for Bun compatibility, but extractStream (the tar extract writable stream) is missing:

// Currently:
nodeStream.destroy();
progressStream.destroy();
gunzipStream.destroy();

// Consider adding:
extractStream.destroy();

If the explicit destroy calls exist specifically because Bun's pipeline() may not fully clean up, then all four streams in the pipeline should be destroyed for consistency. The tar extract stream holds file descriptors to the target directory and is the most important one to clean up.

This is a minor issue — pipeline() on Node.js should handle cleanup correctly, and the explicit destroys are primarily a Bun safety net.

Summary

The PR is in good shape. The code is cleaner, the streaming approach eliminates the memory bottleneck, and tar v7's built-in path traversal protection is a reasonable delegation of security concerns. All modified lines have test coverage per Codecov. The one remaining item (destroying extractStream) is minor but worth addressing for consistency with the stated cleanup rationale.


Reviewed by Claude

devin-ai-integration[bot]

This comment was marked as resolved.

The pipeline catch block was wrapping all errors with 'Failed to extract'
prefix, causing the retry logic to treat transient network errors
(ECONNRESET, ETIMEDOUT) as non-retryable extraction errors.

Remove the catch wrapper so errors propagate as-is, allowing the
retry loop to handle network failures correctly.
devin-ai-integration[bot]

This comment was marked as resolved.

With the streaming pipeline, errors propagate as native Error objects
rather than RepomixError, so the isExtractionError check was always
false. Retrying extraction errors is acceptable since the retry loop
is bounded to 3 attempts.
claude bot (Contributor) commented Feb 23, 2026

Code Review — Replace ZIP archive with streaming tar.gz extraction (final review)

Note: Three prior Claude reviews exist on this PR. This review provides only NEW incremental feedback on the current state of the code.

Overall Assessment

Approve — The code is clean, well-tested, and the streaming approach is architecturally sound. One minor item from prior reviews remains unaddressed.

Still Open from Prior Review

extractStream not destroyed in finally block — src/core/git/gitHubArchive.ts:175-181

This was raised in the second and third Claude reviews and remains unaddressed. Three of the four pipeline streams are explicitly destroyed for Bun compatibility, but extractStream is omitted:

// Current (line 178-180):
nodeStream.destroy();
progressStream.destroy();
gunzipStream.destroy();
// extractStream is not destroyed

If the explicit destroy calls exist because Bun's pipeline() may not fully clean up (as the comment states), then extractStream should also be destroyed. It holds file descriptors to the target directory and is arguably the most important stream to clean up. Consider adding extractStream.destroy() for consistency.

This is minor — Node.js's pipeline() handles cleanup correctly, so this is primarily a Bun safety net.

New Observation

Extraction errors are now retried (behavioral change) — src/core/git/gitHubArchive.ts:74-92

The old code explicitly skipped retries on extraction errors:

// Old code (removed in this PR):
const isExtractionError = lastError instanceof RepomixError && lastError.message.includes('Failed to extract');
if (isExtractionError) {
  throw lastError;
}

This guard was removed. With the streaming approach, download and extraction happen in a single pipeline() call, making it harder to distinguish network errors from extraction errors. As a result, a corrupt archive (e.g., truncated gzip) will now be retried, which is wasteful since the same archive content will likely fail again.

In practice this is acceptable because: (a) GitHub archives are rarely corrupt, (b) the retry count is small (default 3), and (c) the streaming approach makes it genuinely hard to distinguish the error source. Just worth documenting as an intentional behavioral change.
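The retry behavior being described can be illustrated with a minimal bounded retry loop. This is a sketch of the general pattern, not the PR's actual retry implementation; `withRetry` and the attempt count are illustrative.

```javascript
// Retry any failure up to maxAttempts times, then rethrow the last error.
// Without an extraction-error guard, a corrupt archive burns all attempts here.
async function withRetry(task, maxAttempts = 3) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await task(attempt);
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}

// Usage: transient failures on attempts 1-2, success on attempt 3.
withRetry(async (attempt) => {
  if (attempt < 3) throw new Error(`transient failure ${attempt}`);
  return `ok on attempt ${attempt}`;
}).then((result) => console.log(result)); // ok on attempt 3
```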

Summary

  • Prior feedback addressed — getArchiveFilename removed ✅; processConcurrency.ts out of scope (corrected) ✅
  • Prior feedback still open — extractStream.destroy() missing (minor)
  • New findings — Extraction error retry behavior changed (acceptable)
  • Code quality — Clean, well-structured
  • Test coverage — Adequate for the new streaming approach

No blocking issues.


Reviewed by Claude

@yamadashy yamadashy merged commit c5fa59d into main Feb 23, 2026
54 checks passed
@yamadashy yamadashy deleted the refactor/replace-zip-with-tar-gz-streaming branch February 23, 2026 14:12