
feat: ui build in one single http request #3020

Merged 9 commits into langflow-ai:main from new-flow-build on Aug 2, 2024

Conversation

@nicoloboschi (Contributor) commented Jul 29, 2024

Problem

Currently, the backend that serves the frontend can't be scaled up to multiple workers. The main limitation is that the playground builds the components with multiple requests, expecting to always hit the same backend. The current mitigation is to put a distributed cache between the backends (Redis is the only one supported). While this is doable, it adds a lot of complexity from both a coding and an operational perspective.

Solution

This pull request introduces a new endpoint for building the flow in the playground with a single HTTP request, removing the need for the build state to be distributed across workers.
The main requirements for this endpoint are:

  1. Provide updates to the client after each node has been built.
  2. Allow the client to stop the build if it decides so (e.g. to break out of infinite flow loops).

To fulfill these requirements, the endpoint serves the response in a streaming fashion, using the NDJSON format.
After each vertex is built, a JSON object containing the build details is sent to the client, which updates the corresponding state in the UI.
If the client interrupts the connection, all the build tasks are stopped.
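
Here is a minimal sketch of that streaming contract with FastAPI. The path, payload shape, and `build_vertex` helper are hypothetical stand-ins, not the actual Langflow code:

```python
import asyncio
import json

from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse

app = FastAPI()


async def build_vertex(flow_id: str, vertex_id: str) -> dict:
    # Stand-in for the real vertex build logic.
    await asyncio.sleep(0.1)
    return {"status": "built"}


@app.post("/api/v1/build/{flow_id}/stream")  # hypothetical path
async def build_flow(flow_id: str, request: Request) -> StreamingResponse:
    async def event_stream():
        for vertex_id in ("node-1", "node-2"):  # placeholder build order
            # If the client hung up, stop building the remaining vertices.
            if await request.is_disconnected():
                break
            result = await build_vertex(flow_id, vertex_id)
            # NDJSON: one JSON object per line, sent as each vertex finishes.
            yield json.dumps({"id": vertex_id, "data": result}) + "\n"

    return StreamingResponse(event_stream(), media_type="application/x-ndjson")
```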

All build layers are now calculated server-side, which improves testability and performance. Within each layer, the vertices are built in parallel using asyncio.
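
As an illustration of the per-layer parallelism (reusing a hypothetical `build_vertex` stub; `layers` stands in for the server-side dependency grouping):

```python
import asyncio


async def build_vertex(flow_id: str, vertex_id: str) -> dict:
    await asyncio.sleep(0.1)  # stand-in for the real build
    return {"status": "built"}


async def build_all_layers(flow_id: str, layers: list[list[str]]) -> None:
    for layer in layers:
        # Vertices within a layer don't depend on each other, so they can
        # be built concurrently; layers still run in dependency order.
        await asyncio.gather(*(build_vertex(flow_id, v) for v in layer))


asyncio.run(build_all_layers("demo-flow", [["a", "b"], ["c"]]))
```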

The final result is that the build is faster (mainly due to the removed network overhead). From the user's perspective it is sometimes so fast that components seem to be skipped (they stay in the "in-progress" state for only a couple of milliseconds). To improve the user experience, the UI computes the delta between the actual build time and a minimum display time, keeping each component in the "in-progress" state for at least 300 milliseconds. This preserves the feeling of a progressive build in all cases.
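
The minimum-display logic lives in the React frontend; purely as an illustration (in Python, to match the other sketches here, with an assumed constant name), the delta computation amounts to:

```python
import time

MIN_IN_PROGRESS_SECONDS = 0.3  # the 300 ms floor described above


def remaining_display_time(build_started_at: float) -> float:
    """Extra time to keep a vertex "in-progress" so fast builds stay visible."""
    elapsed = time.monotonic() - build_started_at
    return max(0.0, MIN_IN_PROGRESS_SECONDS - elapsed)
```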

Since the frontend now calls a different endpoint, this would be a breaking change (e.g. if you upgrade the frontend first, the backend returns 404 because it doesn't know the new endpoint yet). To overcome that, the old logic is still present and used as a fallback. However, it's recommended to run the same version for the frontend and backend.

One implementation detail is that axios doesn't support streaming responses, so I had to use fetch, which doesn't pass through the common interceptors and could lead to unexpected error handling (errors should nonetheless be handled correctly).

Discarded alternatives

  1. Using SSE instead of NDJSON streaming: modern browsers severely limit the number of concurrent EventSource connections over HTTP/1.1. To make langflow more predictable, I've discarded this solution.
  2. Using a file-based cache in the backend. While this could have helped with multiple workers, it wouldn't have had any effect in deployments with machine-isolated backends (e.g. Kubernetes pods).


@nicoloboschi nicoloboschi force-pushed the new-flow-build branch 2 times, most recently from 4ef0d1c to 205fdd7 Compare July 31, 2024 12:25
@nicoloboschi nicoloboschi marked this pull request as ready for review July 31, 2024 12:25
@@ -116,6 +120,7 @@ const useFlowStore = create<FlowStoreType>((set, get) => ({
newFlowPool[nodeId].push(data);
}
get().setFlowPool(newFlowPool);
get().updateFlowPool(nodeId, data);
A Collaborator commented:
Maybe lines 122 and 123 are doing the same thing.

@nicoloboschi (Author) replied:

Correct, removed now.

@lucaseduoli (Collaborator) left a comment:

The changes look good from the Frontend's side, but I would wait for Gabriel's review of the backend.

# and return the same structure but only with the ids
components_count = len(graph.vertices)
vertices_to_run = list(graph.vertices_to_run.union(get_top_level_vertices(graph, graph.vertices_to_run)))
await chat_service.set_cache(str(flow_id), graph)
A Contributor commented:
I tried testing the Sequential Tasks starter project and something seems to be happening to the OpenAI component. I think it is due to caching. I am not sure we need caching anymore, since even streaming can be done in this endpoint.

@nicoloboschi (Author) replied:

I've removed the cache usage completely

@Cristhianzl (Collaborator) commented:
hey @nicoloboschi,
I tried to run the FE tests locally; it seems like the session is missing after building the flow:

Before: [screenshot]

After: [screenshot]

@nicoloboschi nicoloboschi force-pushed the new-flow-build branch 2 times, most recently from 6a817d1 to 75328f1 Compare August 1, 2024 16:35
@nicoloboschi (Author) replied:

@Cristhianzl nice catch! This has been fixed now

@Cristhianzl (Collaborator) commented Aug 1, 2024:

@nicoloboschi

We have a feature called "Freeze Path"; when I try to run a flow with it activated, it throws an error:

[screenshots]

Could you please check for us?

@nicoloboschi nicoloboschi added the lgtm This PR has been approved by a maintainer label Aug 2, 2024
@nicoloboschi (Author) replied:

@Cristhianzl I've fixed it. I've also found and fixed a related bug in another PR - #3158

@Cristhianzl (Collaborator) commented:

@nicoloboschi the frontend e2e tests passed (good job!)

Just one point: I tried to run the backend tests and got a few errors. Could you please run "make tests" and check for us?
thank you!

@ogabrielluiz ogabrielluiz merged commit f311a6d into langflow-ai:main Aug 2, 2024
51 checks passed
Labels
enhancement (New feature or request), lgtm (This PR has been approved by a maintainer), size:XL (This PR changes 500-999 lines, ignoring generated files)