Adds RPCs for workflow list and get history #93
base: main
Conversation
Signed-off-by: joshvanl <[email protected]>
Force-pushed from 11b5625 to 4221a81
```proto
optional uint32 continuationToken = 1;

// pageSize is the maximum number of instances to return for this page. If
// not given, all instances will be attempted to be returned.
```
I think it'd be safer to set a maximum allowed value.
What would you consider safe in this context?
100? Completely arbitrary, but it seems large enough to be useful and small enough not to stress the system.
How about 1024? 😄
Do you think ~1000 items are manageable? That feels like a bit too much to me. The GitHub API normally limits to 100 per page, with a default of 30, but their payloads are big. What do you suggest? No limit, and just return them all?
I should think so; we are only returning the key strings, not the value payloads.
In that case I think we can keep it high; 1000 is fine with me if you feel it's right, but I'd still enforce a limit just to protect us from situations where there are too many keys to return.
Shouldn't the user be the one to enforce that limit? They have the controls with the page size and continuation token.
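(For illustration, a minimal Go sketch of the server-side clamp being suggested in this thread. The constant, helper name, and the value 100 are all hypothetical; 100 is just the arbitrary figure floated above.)

```go
package listlimit

// maxPageSize is an assumed, arbitrary cap (the thread floats 100 and 1024).
const maxPageSize = 100

// effectivePageSize clamps a client-requested page size so that a single
// ListInstances call can never return an unbounded number of keys, even if
// the client asks for more or omits pageSize entirely (i.e. passes 0).
func effectivePageSize(requested uint32) uint32 {
	if requested == 0 || requested > maxPageSize {
		return maxPageSize
	}
	return requested
}
```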
20251028-RS-workflow-list.md (outdated)
```proto
message ListInstancesResponse {
  // instanceIds is the list of instance IDs returned.
  repeated string instanceIds = 1;
}
```
Would it be useful to include something about the instance in this message? Maybe the name of the workflow at least?
Also, shouldn't this return a continuationId for the pagination?
Because we store workflows in a key-value store, with the instance ID being part of the key, a simple table scan gives us access to only that without decoding each key. The users of this API will do follow-up GetInstanceIDHistory or metadata lookups for each instance ID.
We don't want to keep any inter-request state on the server side, so it is up to the client to calculate this, i.e. the next token is continuationToken + pageSize. If len(instanceIds) < pageSize, the client knows it has reached the end of the full list.
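To make that client-driven model concrete, here is a minimal Go sketch of the loop described above. All type and method names are hypothetical stand-ins for the generated gRPC client, not the actual durabletask-go API:

```go
package pagination

// Hypothetical wire types mirroring the proto sketch under review.
type ListInstancesRequest struct {
	ContinuationToken uint32
	PageSize          uint32
}

type ListInstancesResponse struct {
	InstanceIDs []string
}

// lister stands in for the real gRPC client.
type lister interface {
	ListInstances(req ListInstancesRequest) (ListInstancesResponse, error)
}

// listAll pages through every instance ID. The client owns all pagination
// state: the token is simply the offset of the next page (previous token +
// pageSize), and a short page (fewer than pageSize IDs) signals the end.
func listAll(c lister, pageSize uint32) ([]string, error) {
	var all []string
	var token uint32
	for {
		resp, err := c.ListInstances(ListInstancesRequest{
			ContinuationToken: token,
			PageSize:          pageSize,
		})
		if err != nil {
			return nil, err
		}
		all = append(all, resp.InstanceIDs...)
		if uint32(len(resp.InstanceIDs)) < pageSize {
			return all, nil
		}
		token += pageSize
	}
}
```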
After thinking about this over the weekend and doing some implementation work, I agree it is correct that we return a continuation token on the response! 🙂
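For contrast, a sketch of how the client loop simplifies once the server returns the token itself. The ContinuationToken response field is the change agreed here, but its exact shape (an optional uint32, mirrored as a pointer in Go) is an assumption, as are all names:

```go
package paginationv2

// Revised hypothetical response: the server hands back the token for the
// next page; nil mirrors an unset `optional` field and means no more pages.
type ListInstancesResponseV2 struct {
	InstanceIDs       []string
	ContinuationToken *uint32
}

// listerV2 stands in for the revised gRPC client.
type listerV2 interface {
	ListInstances(token *uint32, pageSize uint32) (ListInstancesResponseV2, error)
}

// listAllV2 no longer computes offsets; the token is opaque to the client,
// which simply echoes it back until the server stops supplying one.
func listAllV2(c listerV2, pageSize uint32) ([]string, error) {
	var all []string
	var token *uint32
	for {
		resp, err := c.ListInstances(token, pageSize)
		if err != nil {
			return nil, err
		}
		all = append(all, resp.InstanceIDs...)
		if resp.ContinuationToken == nil {
			return all, nil
		}
		token = resp.ContinuationToken
	}
}
```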
cicoyle left a comment:
I'm mostly good with this proposal and think it's a great value add. Once workflow versioning is supported, will we need any tweaking to include the workflow version in the list or history views?
Also, thoughts on changing instanceIds -> instanceIDs && instanceId -> instanceID?
I would prefer we take the approach of using the Event Streaming proposal over building workflow list capabilities (driven by my initial Workflows lifecycle event feature request). I appreciate this might be an effective solution for small workloads (a couple of hundred workflow instances), but paging an unbounded set of Workflow instances is going to be a nightmare at scale. Continuation Tokens are okay, but they're nothing more than the encoding of an offset into the set.

But we know that Workflow statuses change over time, so forward-only traversal of a set via paging is going to miss workflows whose status has changed. At this point you have to start going down the Change Data Capture/Feed route, which is a whole 'nother thing! Look at Cosmos DB's change feed, for example.

Far simpler IMO to move forward with the Event Streaming proposal, which gives people an evented model for handling changes rather than the proposed polling/list model, and which can be used in many different ways to satisfy many different use-cases (as described in my Workflows lifecycle event feature request).

I appreciate that both this proposal and the Event Streaming proposal can co-exist and be delivered as two separate things without any problems. I just feel we get more mileage out of the Event Streaming, hence my preference. That said, this proposal still gets my non-binding vote of approval, despite its inherent complexities at scale.
Signed-off-by: joshvanl <[email protected]>
Implements these new methods, per dapr/proposals#93
Signed-off-by: joshvanl <[email protected]>
As you say, this proposal is in no way trying to supersede or replace the event streaming feature proposal.
Today, there is no way of discovering the list of workflow instances that are currently running or have completed in the past without using external storage queries. The Dapr CLI [introduced list and workflow history commands](dapr/cli#1560) to get information about running and completed workflows; however, these commands rely on direct queries to the underlying storage provider. By introducing this functionality into the durabletask framework itself, these commands need only talk to Daprd, removing the requirement for direct access to the storage provider as well as its authentication. Daprd can make these queries itself, using the Actor State Store component to access the underlying storage.
Implements the new durabletask APIs according to dapr/proposals#93
Signed-off-by: joshvanl <[email protected]>
+1 binding
Update components-contrib & durabletask-go to origin
Use shared const for ActorTypePrefix
Signed-off-by: joshvanl <[email protected]>