fix: keep site upload batches under 98 mb #1392

Merged
merged 1 commit on Jul 4, 2022
9 changes: 9 additions & 0 deletions .changeset/strange-cameras-hammer.md
@@ -0,0 +1,9 @@
---
"wrangler": patch
---

fix: keep site upload batches under 98 mb

The maximum _request_ size for a batch upload is 100 MB, but we were previously capping each batch at just under _100 MiB_ (about 104.9 MB), so a full batch could already exceed the limit. Even in the right units, a few bytes of overhead here and there can push a request over the line. This fix calculates sizes in MB rather than MiB, and also lowers our own cap to 98 MB so there's some wiggle room for uploads.

Fixes https://github.com/cloudflare/wrangler2/issues/1367
13 changes: 8 additions & 5 deletions packages/wrangler/src/__tests__/publish.test.ts
@@ -2480,13 +2480,16 @@ addEventListener('fetch', event => {});`

await runWrangler("publish");

-// We expect this to be uploaded in 3 batches
-// The first batch has 11 files (88mb)
+// We expect this to be uploaded in 4 batches
+
+// The first batch has 11 files
 expect(requests[0].uploads.length).toEqual(11);
-// The next batch has 5 files (93mb)
+// The next batch has 5 files
 expect(requests[1].uploads.length).toEqual(5);
-// And the last one has 4 files (98mb)
-expect(requests[2].uploads.length).toEqual(4);
+// And the next one has 3 files
+expect(requests[2].uploads.length).toEqual(3);
+// And just 1 in the last batch
+expect(requests[3].uploads.length).toEqual(1);

 let assetIndex = 0;
 for (const request of requests) {
10 changes: 5 additions & 5 deletions packages/wrangler/src/sites.tsx
@@ -138,9 +138,9 @@ export async function syncAssets(

 const manifest: Record<string, string> = {};

-// A batch of uploads where each bucket has to be less than 100mb
+// A batch of uploads where each bucket has to be less than 98mb
 const uploadBuckets: KeyValue[][] = [];
-// The "live" bucket that we'll keep filling until it's just below 100mb
+// The "live" bucket that we'll keep filling until it's just below 98mb
 let uploadBucket: KeyValue[] = [];
 // A size counter for the live bucket
 let uploadBucketSize = 0;
@@ -166,7 +166,7 @@ export async function syncAssets(
 const content = await readFile(absAssetFile, "base64");
 await validateAssetSize(absAssetFile, assetFile);
 // while KV accepts files that are 25 MiB **before** b64 encoding
-// the overall bucket size must be below 100 MiB **after** b64 encoding
+// the overall bucket size must be below 100 MB **after** b64 encoding
 const assetSize = Buffer.from(content).length;
 const assetKey = hashAsset(hasher, assetFile, content);
 validateAssetKey(assetKey);
@@ -176,8 +176,8 @@ export async function syncAssets(
 logger.log(`Uploading as ${assetKey}...`);

 // Check if adding this asset to the bucket would
-// push it over the 100 MiB KV bulk API limit
-if (uploadBucketSize + assetSize > 100 * 1024 * 1024) {
+// push it over the 98 MB KV bulk API limit
+if (uploadBucketSize + assetSize > 98 * 1000 * 1000) {
   // If so, move the current bucket into the batch,
   // and reset the counter/bucket
   uploadBuckets.push(uploadBucket);
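The bucket-filling logic above can be sketched end to end. This is a simplified stand-in, not the real implementation in `packages/wrangler/src/sites.tsx`: the `KeyValue` shape, the `limit` parameter, and the non-empty-bucket guard are illustrative assumptions.

```typescript
// Simplified sketch of the greedy bucketing in sites.tsx.
interface KeyValue {
  key: string;
  value: string; // base64-encoded file content
  base64: boolean;
}

const BATCH_LIMIT = 98 * 1000 * 1000; // 98 MB, under the 100 MB request limit

function bucketAssets(
  assets: KeyValue[],
  limit: number = BATCH_LIMIT // parameterized here for testability; an assumption
): KeyValue[][] {
  const buckets: KeyValue[][] = [];
  let bucket: KeyValue[] = [];
  let bucketSize = 0;
  for (const asset of assets) {
    // Size is measured *after* base64 encoding, matching the request payload
    const size = Buffer.from(asset.value).length;
    if (bucketSize + size > limit && bucket.length > 0) {
      // Adding this asset would overflow the batch: seal it, start a fresh one
      buckets.push(bucket);
      bucket = [];
      bucketSize = 0;
    }
    bucket.push(asset);
    bucketSize += size;
  }
  if (bucket.length > 0) buckets.push(bucket); // flush the final partial bucket
  return buckets;
}
```

With a toy 25-byte limit, three 10-byte assets split into a bucket of two and a bucket of one, the same greedy pattern the test above checks with its 11/5/3/1 split across four batches.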