Which Cloudflare product(s) does this pertain to?
Wrangler, Workflows
What version(s) of the tool(s) are you using?
wrangler: 3.85.0, @cloudflare/workers-types: 4.20241106.0, typescript: 5.6.3
What version of Node are you using?
20.17.0
What operating system and version are you using?
Mac Sonoma 14.6.1
Describe the Bug
Observed behavior
When running multiple workflows locally using wrangler dev, the local development server crashes with a fatal uncaught exception indicating a duplicate row insertion. However, the same code works fine when deployed to production.

Error in local development:
Fatal uncaught kj::Exception: kj/table.c++:49: failed: inserted row already exists in table
stack: 1030ea047 100b305c3 100b304a7 1007fcf8f 100806cf3 1007d9c43 1007d7e9b 1007c97d7 1030dd0d7 1030dd3db 1030dbcff 1030dba97 1007b87bf 1964eb153
Expected behavior
The local development server should handle multiple workflows in the same way as the production environment, allowing developers to test multiple workflow configurations locally.
Steps to reproduce

1. Define multiple workflows in wrangler.toml, each with unique names, bindings, and class names
2. Create a Worker that uses these workflows
3. Run wrangler dev to start the local development server
4. The server crashes with the table insertion error

Note: This issue only occurs in local development. The same configuration works correctly when deployed to production.
Minimal reproduction attached
The error can be reproduced with this simple setup that defines two basic workflows:

src/index.ts:

import { WorkflowEntrypoint, WorkflowStep, WorkflowEvent } from 'cloudflare:workers';

type Env = {
  MY_WORKFLOW: Workflow;
  MY_WORKFLOW2: Workflow;
};

type Params = {
  message: string;
};

export class MyWorkflow extends WorkflowEntrypoint<Env, Params> {
  async run(event: WorkflowEvent<Params>, step: WorkflowStep) {
    await step.do('simple step', async () => {
      console.log(`Workflow 1: ${event.payload.message}`);
      return { success: true };
    });
  }
}

export class MyWorkflow2 extends WorkflowEntrypoint<Env, Params> {
  async run(event: WorkflowEvent<Params>, step: WorkflowStep) {
    await step.do('simple step', async () => {
      console.log(`Workflow 2: ${event.payload.message}`);
      return { success: true };
    });
  }
}

export default {
  async fetch(req: Request, env: Env): Promise<Response> {
    const url = new URL(req.url);
    if (url.pathname.startsWith('/favicon')) {
      return Response.json({}, { status: 404 });
    }

    // Get the status of an existing instance
    const id = url.searchParams.get('instanceId');
    const workflow = url.searchParams.get('workflow') || '1';
    if (id) {
      const instance = await (workflow === '1' ? env.MY_WORKFLOW : env.MY_WORKFLOW2).get(id);
      return Response.json({
        status: await instance.status(),
      });
    }

    // Create new instances of both workflows
    const [instance1, instance2] = await Promise.all([
      env.MY_WORKFLOW.create({ params: { message: 'Hello from Workflow 1' } }),
      env.MY_WORKFLOW2.create({ params: { message: 'Hello from Workflow 2' } }),
    ]);

    return Response.json({
      workflow1: { id: instance1.id, details: await instance1.status() },
      workflow2: { id: instance2.id, details: await instance2.status() },
    });
  },
};

wrangler.toml:

name = "workflows-starter"
main = "src/index.ts"
compatibility_date = "2024-10-22"

[observability]
enabled = true
head_sampling_rate = 1

[[workflows]]
name = "workflows-starter"
binding = "MY_WORKFLOW"
class_name = "MyWorkflow"

[[workflows]]
name = "workflows-starter-2"
binding = "MY_WORKFLOW2"
class_name = "MyWorkflow2"
package.json dependencies:
{
"devDependencies": {
"@cloudflare/workers-types": "^4.20241106.0",
"typescript": "^5.6.3",
"wrangler": "^3.85.0"
}
}
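For reference, once wrangler dev is serving the Worker, the reproduction can be exercised with a small script like the sketch below. It assumes the default local dev address http://localhost:8787, and the filename check-workflows.ts and the polling loop are illustrative, not part of the original repro; the instanceId and workflow query parameters come from the fetch handler above.

check-workflows.ts:

// Sketch: exercise the repro against a local `wrangler dev` server.
// Assumes the default dev address http://localhost:8787.
const base = 'http://localhost:8787';

// A bare GET creates one instance of each workflow (see the fetch handler above).
const created = await fetch(base);
const { workflow1, workflow2 } = await created.json();

// Poll each instance's status via the instanceId/workflow query parameters.
for (const [id, which] of [[workflow1.id, '1'], [workflow2.id, '2']]) {
  const res = await fetch(`${base}/?instanceId=${id}&workflow=${which}`);
  console.log(`workflow ${which}:`, await res.json());
}

Given the bug, the local dev server may crash before this script gets a response; per the report above, the same requests succeed against the deployed Worker.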
Please provide a link to a minimal reproduction
No response
Please provide any relevant error logs
See the kj::Exception output under Observed behavior above.