🐛 BUG: [wrangler dev] Multiple workflows crash local development server #7186

Closed
ShahriarHD opened this issue Nov 7, 2024 · 1 comment
Labels
bug Something that isn't working

Comments


ShahriarHD commented Nov 7, 2024

Which Cloudflare product(s) does this pertain to?

Wrangler, Workflows

What version(s) of the tool(s) are you using?

wrangler: 3.85.0, @cloudflare/workers-types: 4.20241106.0, typescript: 5.6.3

What version of Node are you using?

20.17.0

What operating system and version are you using?

macOS Sonoma 14.6.1

Describe the Bug

Observed behavior

When running multiple workflows locally using wrangler dev, the local development server crashes with a fatal uncaught exception indicating a duplicate row insertion. However, the same code works fine when deployed to production.

Error in local development:

Fatal uncaught kj::Exception: kj/table.c++:49: failed: inserted row already exists in table
stack: 1030ea047 100b305c3 100b304a7 1007fcf8f 100806cf3 1007d9c43 1007d7e9b 1007c97d7 1030dd0d7 1030dd3db 1030dbcff 1030dba97 1007b87bf 1964eb153

Expected behavior

The local development server should handle multiple workflows in the same way as the production environment, allowing developers to test multiple workflow configurations locally.

Steps to reproduce

  1. Define multiple workflows in wrangler.toml, each with unique names, bindings, and class names
  2. Create a worker that uses these workflows
  3. Run wrangler dev to start the local development server
  4. Observe that the server crashes with the "inserted row already exists" error

Note: This issue only occurs in local development. The same configuration works correctly when deployed to production.

Minimal reproduction attached

The error can be reproduced with this simple setup that defines two basic workflows in src/index.ts:

import { WorkflowEntrypoint, WorkflowStep, WorkflowEvent } from 'cloudflare:workers';

type Env = {
    MY_WORKFLOW: Workflow;
    MY_WORKFLOW2: Workflow;
};

type Params = {
    message: string;
};

export class MyWorkflow extends WorkflowEntrypoint<Env, Params> {
    async run(event: WorkflowEvent<Params>, step: WorkflowStep) {
        await step.do('simple step', async () => {
            console.log(`Workflow 1: ${event.payload.message}`);
            return { success: true };
        });
    }
}

export class MyWorkflow2 extends WorkflowEntrypoint<Env, Params> {
    async run(event: WorkflowEvent<Params>, step: WorkflowStep) {
        await step.do('simple step', async () => {
            console.log(`Workflow 2: ${event.payload.message}`);
            return { success: true };
        });
    }
}

export default {
    async fetch(req: Request, env: Env): Promise<Response> {
        const url = new URL(req.url);

        if (url.pathname.startsWith('/favicon')) {
            return Response.json({}, { status: 404 });
        }

        // Get the status of an existing instance
        const id = url.searchParams.get('instanceId');
        const workflow = url.searchParams.get('workflow') || '1';

        if (id) {
            const instance = await (workflow === '1' ? env.MY_WORKFLOW : env.MY_WORKFLOW2).get(id);
            return Response.json({
                status: await instance.status(),
            });
        }

        // Create new instances of both workflows
        const [instance1, instance2] = await Promise.all([
            env.MY_WORKFLOW.create({
                params: { message: 'Hello from Workflow 1' },
            }),
            env.MY_WORKFLOW2.create({
                params: { message: 'Hello from Workflow 2' },
            }),
        ]);

        return Response.json({
            workflow1: {
                id: instance1.id,
                details: await instance1.status(),
            },
            workflow2: {
                id: instance2.id,
                details: await instance2.status(),
            },
        });
    },
};
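
For context on the handler above: a request with no query parameters creates one instance of each workflow and returns both statuses, while a request with ?instanceId=<id>&workflow=2 looks up an existing MY_WORKFLOW2 instance and returns its status. In local development the crash happens while the server is starting, so no request ever reaches this handler.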

wrangler.toml:

name = "workflows-starter"
main = "src/index.ts"
compatibility_date = "2024-10-22"

[observability]
enabled = true
head_sampling_rate = 1

[[workflows]]
name = "workflows-starter"
binding = "MY_WORKFLOW"
class_name = "MyWorkflow"

[[workflows]]
name = "workflows-starter-2"
binding = "MY_WORKFLOW2"
class_name = "MyWorkflow2"

package.json dependencies:
{
  "devDependencies": {
    "@cloudflare/workers-types": "^4.20241106.0",
    "typescript": "^5.6.3",
    "wrangler": "^3.85.0"
  }
}
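
Possible interim workaround

If the crash is indeed triggered by registering more than one workflow locally, a sketch of an interim workaround is to keep a trimmed config with a single [[workflows]] entry for local development and pass it to wrangler dev via its --config flag. The file name wrangler.local.toml below is hypothetical, and any request that touches env.MY_WORKFLOW2 will fail at runtime because that binding is absent; this only gets the dev server to start, it does not fix the bug.

# wrangler.local.toml — hypothetical trimmed config for local development only
name = "workflows-starter"
main = "src/index.ts"
compatibility_date = "2024-10-22"

[observability]
enabled = true
head_sampling_rate = 1

# Only the first workflow is registered locally; the MY_WORKFLOW2 binding is unavailable.
[[workflows]]
name = "workflows-starter"
binding = "MY_WORKFLOW"
class_name = "MyWorkflow"

Run with wrangler dev --config wrangler.local.toml, and deploy with the full wrangler.toml as before.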

Please provide a link to a minimal reproduction

No response

Please provide any relevant error logs

--- 2024-11-07T10:05:23.254Z log
⎔ Starting local server...
---

--- 2024-11-07T10:05:23.369Z debug
workerd/util/symbolizer.c++:101: warning: Not symbolizing stack traces because $LLVM_SYMBOLIZER is not set. To symbolize stack traces, set $LLVM_SYMBOLIZER to the location of the llvm-symbolizer binary. When running tests under bazel, use `--test_env=LLVM_SYMBOLIZER=<path>`.
*** Fatal uncaught kj::Exception: kj/table.c++:49: failed: inserted row already exists in table
stack: 105316047 102d5c5c3 102d5c4a7 102a28f8f 102a32cf3 102a05c43 102a03e9b 1029f57d7 1053090d7 1053093db 105307cff 105307a97 1029e47bf 1964eb153
---

--- 2024-11-07T10:05:23.370Z debug
Error in LocalRuntimeController: Error reloading local server
 MiniflareCoreError [ERR_RUNTIME_FAILURE]: The Workers runtime failed to start. There is likely additional logging output above.
    at #assembleAndUpdateConfig ($USRDIR/node_modules/.pnpm/[email protected]/node_modules/miniflare/dist/src/index.js:9980:13)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Mutex.runWith ($USRDIR/node_modules/.pnpm/[email protected]/node_modules/miniflare/dist/src/index.js:3632:16)
    at async #waitForReady ($USRDIR/node_modules/.pnpm/[email protected]/node_modules/miniflare/dist/src/index.js:10037:5)
    at async #onBundleComplete ($USRDIR/node_modules/.pnpm/[email protected]_@[email protected]/node_modules/wrangler/wrangler-dist/cli.js:216902:29)
    at async Mutex.runWith ($USRDIR/node_modules/.pnpm/[email protected]/node_modules/miniflare/dist/src/index.js:3632:16) {
  code: 'ERR_RUNTIME_FAILURE',
  cause: undefined
}
---

--- 2024-11-07T10:05:23.370Z debug
=> Error contextual data: undefined
---

--- 2024-11-07T10:05:23.393Z error
✘ [ERROR] The Workers runtime failed to start. There is likely additional logging output above.
---

Skye-31 (Contributor) commented Nov 7, 2024

Thanks for reporting - this is a duplicate of #7127, so I'll close this one.

Skye-31 closed this as not planned (duplicate of #7127) on Nov 7, 2024