Conversation

@JoshVanL
Contributor

No description provided.

@JoshVanL JoshVanL force-pushed the 20251028-RS-workflow-list.md branch from 11b5625 to 4221a81 Compare October 28, 2025 17:27
```proto
optional uint32 continuationToken = 1;

// pageSize is the maximum number of instances to return for this page. If
// not given, all instances will be attempted to be returned.
```
Member

I think it'd be safer to set a maximum allowed value.

Contributor Author

What would you consider safe in this context?

Member

100? Completely arbitrary, but it seems large enough to be useful and small enough not to stress the system.

Contributor Author

How about 1024? 😄

Member

Do you think ~1000 items are manageable? It feels like a bit too much to me. The GitHub API normally limits to 100 per page, with a default of 30, but their payloads are big. What do you suggest? No limit and just return them all?

Contributor Author

I should think so; we are only returning the key strings, not the value payloads.

Member

In that case I think we can keep it high; 1000 is fine if you're comfortable with it, but I'd enforce a limit just to protect us from situations where there are simply too many keys to return.

Contributor Author

Shouldn't the user be the one to enforce that limit? They have the controls via the page size and continuation token.
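For illustration, here is a minimal Go sketch of the server-side cap being discussed. `maxPageSize` and `effectivePageSize` are hypothetical names, and the limit value is just one of the candidates floated in this thread; the proposal as written instead leaves the limit to the caller.

```go
package listapi

// maxPageSize is a hypothetical upper bound; the thread floats 100 and 1000
// as candidate values.
const maxPageSize uint32 = 1000

// effectivePageSize returns the page size the server would actually honour:
// the caller's value capped at maxPageSize, with an unset (zero) value
// falling back to the maximum rather than "return everything".
func effectivePageSize(requested uint32) uint32 {
	if requested == 0 || requested > maxPageSize {
		return maxPageSize
	}
	return requested
}
```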

Comment on lines 46 to 49
```proto
message ListInstancesResponse {
  // instanceIds is the list of instance IDs returned.
  repeated string instanceIds = 1;
}
```
Member

Would it be useful to include something about the instance in this message? Maybe the name of the workflow at least?

Member

Also, shouldn't this return a continuationId for the pagination?

Contributor Author

Because we store workflows in a key-value store, with the instance ID embedded in the key, a simple table scan only gives us access to the instance ID without decoding each key. Users of this API will do follow-up GetInstanceIDHistory or metadata lookups for each instance ID.

We don't want to keep any inter-request state on the server side, so it is up to the client to calculate this, namely continuationToken + pageSize. If len(instanceIds) < pageSize, the client knows it has reached the end of the full list.
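A minimal Go sketch of that client-side loop, assuming a hypothetical client interface with a `ListInstances(ctx, continuationToken, pageSize)` method; the names and signature here are illustrative, not the final durabletask-go API.

```go
package listclient

import "context"

// lister is a hypothetical client abstraction over the proposed API.
type lister interface {
	// ListInstances returns up to pageSize instance IDs for the page
	// identified by continuationToken.
	ListInstances(ctx context.Context, continuationToken, pageSize uint32) ([]string, error)
}

// listAllInstances pages through every instance ID. The server keeps no
// inter-request state: the client derives the next token itself and stops
// when a page comes back short.
func listAllInstances(ctx context.Context, c lister, pageSize uint32) ([]string, error) {
	var (
		all   []string
		token uint32
	)
	for {
		ids, err := c.ListInstances(ctx, token, pageSize)
		if err != nil {
			return nil, err
		}
		all = append(all, ids...)

		// Fewer results than requested means we reached the end of the list.
		if uint32(len(ids)) < pageSize {
			return all, nil
		}
		// The next token is simply continuationToken + pageSize.
		token += pageSize
	}
}
```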

Contributor Author

After thinking about this over the weekend and doing some implementation work, I agree it is correct that we return a continuation token on the response! 🙂
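One possible shape for the updated response, extending the message quoted above; the continuationToken name mirrors the request field, while the field number and exact semantics are assumptions rather than the final proto.

```proto
message ListInstancesResponse {
  // instanceIds is the list of instance IDs returned.
  repeated string instanceIds = 1;

  // continuationToken, when set, is passed on the next request to fetch the
  // following page; its absence signals the end of the list.
  optional uint32 continuationToken = 2;
}
```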

Contributor

@cicoyle cicoyle left a comment

I'm mostly good with this proposal and think it's a great value add. Once workflow versioning is supported, will we need any tweaking to include the workflow version in the list or history views?

Also, thoughts on changing instanceIds -> instanceIDs && instanceId -> instanceID?

@olitomlinson

olitomlinson commented Nov 5, 2025

I would prefer we take the approach of using the Event Streaming proposal over building workflow list capabilities (driven by my initial Workflows lifecycle event feature request).

I appreciate this might be an effective solution for small workloads (a couple of hundred workflow instances), but paging an unbounded set of workflow instances is going to be a nightmare at scale. Continuation tokens are okay, but they're nothing more than the encoding of a page size and offset to move through a set.

But we know that workflow statuses change over time, so forward-only traversal of a set via paging is going to miss workflows whose status has changed. At that point you have to start going down the Change Data Capture/Feed route, which is a whole other thing! Look at Cosmos DB's change feed, for example.

Far simpler, IMO, to move forward with the Event Streaming proposal, which gives people an evented model for handling changes rather than the proposed polling/list model, and which can be used in many different ways to satisfy many different use cases (as described in my Workflows lifecycle event feature request).

Btw, it's not just running and completed workflows that folks are interested in discovering; it could be workflows in any of the available states.

I appreciate that both this proposal and the Event Streaming proposal can co-exist and be delivered as two separate things without any problems. I just feel we get more mileage out of Event Streaming, hence my preference. That said, this proposal still gets my non-binding vote of approval, despite its inherent complexities at scale.

JoshVanL added a commit to JoshVanL/durabletask-go that referenced this pull request Nov 5, 2025
Implements the new methods from dapr/proposals#93

Signed-off-by: joshvanl <[email protected]>
@JoshVanL
Contributor Author

JoshVanL commented Nov 5, 2025

> I would prefer we take the approach of using the Event Streaming proposal over building workflow list capabilities (driven by my initial Workflows lifecycle event feature request).
>
> I appreciate this might be an effective solution for small workloads (a couple of hundred workflow instances), but paging an unbounded set of workflow instances is going to be a nightmare at scale. Continuation tokens are okay, but they're nothing more than the encoding of a page size and offset to move through a set.
>
> But we know that workflow statuses change over time, so forward-only traversal of a set via paging is going to miss workflows whose status has changed. At that point you have to start going down the Change Data Capture/Feed route, which is a whole other thing! Look at Cosmos DB's change feed, for example.
>
> Far simpler, IMO, to move forward with the Event Streaming proposal, which gives people an evented model for handling changes rather than the proposed polling/list model, and which can be used in many different ways to satisfy many different use cases (as described in my Workflows lifecycle event feature request).
>
> Btw, it's not just running and completed workflows that folks are interested in discovering; it could be workflows in any of the available states.
>
> I appreciate that both this proposal and the Event Streaming proposal can co-exist and be delivered as two separate things without any problems. I just feel we get more mileage out of Event Streaming, hence my preference. That said, this proposal still gets my non-binding vote of approval, despite its inherent complexities at scale.

As you say, this proposal is in no way trying to supersede or replace the event streaming feature proposal.
This functionality is specifically being added to support the CLI commands (`dapr workflow list` etc.) without the user needing to hand any database connection string to the CLI. Adding event streaming would not solve this problem.

JoshVanL added a commit to JoshVanL/dapr that referenced this pull request Nov 5, 2025
Today, there is no way of discovering the list of workflow instances that are currently running or have completed in the past without using external storage queries.
The Dapr CLI [introduced list and workflow history commands](dapr/cli#1560) to get information about running and completed workflows; however, these commands rely on direct queries to the underlying storage provider.
By introducing this functionality into the durabletask framework itself, these commands need only talk to Daprd, removing the requirement for direct access to, and authentication against, the storage provider.
Daprd can make these queries itself, using the Actor State Store component to access the underlying storage.

Implements the new durabletask APIs according to dapr/proposals#93

Signed-off-by: joshvanl <[email protected]>
@cicoyle
Contributor

cicoyle commented Nov 17, 2025

+1 binding

JoshVanL added a commit to JoshVanL/dapr that referenced this pull request Nov 18, 2025
Today, there is no way of discovering the list of workflow instances that are currently running or have completed in the past without using external storage queries.
The Dapr CLI [introduced list and workflow history commands](dapr/cli#1560) to get information about running and completed workflows; however, these commands rely on direct queries to the underlying storage provider.
By introducing this functionality into the durabletask framework itself, these commands need only talk to Daprd, removing the requirement for direct access to, and authentication against, the storage provider.
Daprd can make these queries itself, using the Actor State Store component to access the underlying storage.

Implements the new durabletask APIs according to dapr/proposals#93

Signed-off-by: joshvanl <[email protected]>
JoshVanL added a commit to JoshVanL/dapr that referenced this pull request Nov 18, 2025
Today, there is no way of discovering the list of workflow instances that are currently running or have completed in the past without using external storage queries.
The Dapr CLI [introduced list and workflow history commands](dapr/cli#1560) to get information about running and completed workflows; however, these commands rely on direct queries to the underlying storage provider.
By introducing this functionality into the durabletask framework itself, these commands need only talk to Daprd, removing the requirement for direct access to, and authentication against, the storage provider.
Daprd can make these queries itself, using the Actor State Store component to access the underlying storage.

Implements the new durabletask APIs according to dapr/proposals#93

Update components-contrib & durabletask-go to origin

Use shared const for ActorTypePrefix

Signed-off-by: joshvanl <[email protected]>
JoshVanL added a commit to dapr/dapr that referenced this pull request Nov 18, 2025
Today, there is no way of discovering the list of workflow instances that are currently running or have completed in the past without using external storage queries.
The Dapr CLI [introduced list and workflow history commands](dapr/cli#1560) to get information about running and completed workflows; however, these commands rely on direct queries to the underlying storage provider.
By introducing this functionality into the durabletask framework itself, these commands need only talk to Daprd, removing the requirement for direct access to, and authentication against, the storage provider.
Daprd can make these queries itself, using the Actor State Store component to access the underlying storage.

Implements the new durabletask APIs according to dapr/proposals#93

Update components-contrib & durabletask-go to origin

Use shared const for ActorTypePrefix

Signed-off-by: joshvanl <[email protected]>