feat: ui build in one single http request #3020
Conversation
Force-pushed from 4ef0d1c to 205fdd7
Force-pushed from 44e8548 to bc79582
src/frontend/src/stores/flowStore.ts
Outdated
@@ -116,6 +120,7 @@ const useFlowStore = create<FlowStoreType>((set, get) => ({
        newFlowPool[nodeId].push(data);
      }
      get().setFlowPool(newFlowPool);
      get().updateFlowPool(nodeId, data);
Maybe lines 122 and 123 are doing the same thing.
Correct, removed now.
The changes look good from the Frontend's side, but I would wait for Gabriel's review of the backend.
# and return the same structure but only with the ids
components_count = len(graph.vertices)
vertices_to_run = list(graph.vertices_to_run.union(get_top_level_vertices(graph, graph.vertices_to_run)))
await chat_service.set_cache(str(flow_id), graph)
I tried testing the Sequential Tasks starter project and something seems to be happening to the OpenAI component. I think it is due to caching. I am not sure we need caching anymore, since even streaming can be done in this endpoint.
I've removed the cache usage completely
Force-pushed from 6cdfb8f to b78719f
hey @nicoloboschi,
Force-pushed from 6a817d1 to 75328f1
Force-pushed from d777f07 to 3157947
@Cristhianzl nice catch! This has been fixed now.
We have a feature called "Freeze Path"; when I try to run a flow with it activated, it throws an error. Could you please check for us?
Force-pushed from ec1c871 to 7b8f5ec
@Cristhianzl I've fixed it. I've also found and fixed a related bug in another PR: #3158
@nicoloboschi the frontend e2e tests passed (good job!). Just one point: I tried to run the backend tests and got a few errors. Could you please run "make tests" and check for us?
Problem
Currently the backend that serves the frontend can't be scaled up to use multiple workers. The main limitation is that the playground builds the components with multiple requests, expecting to always hit the same backend. The current mitigation is to put a distributed cache between the backend workers (Redis is the only one supported). While this is doable, it adds a lot of complexity from both the coding and the operational perspectives.
Solution
This pull request introduces a new endpoint for building the flow in the playground in a single HTTP request, removing the need for the state to be distributed across workers.
The main requirements for this endpoint are:
To fulfill these requirements, the endpoint serves the response in a streaming-like fashion (NDJSON format).
After each vertex is built, a JSON object containing the build details is sent to the client, which reflects the state in the UI.
If the client interrupts the connection, all the build tasks are stopped.
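To make the wire format concrete, here is a minimal client-side sketch of parsing such an NDJSON stream chunk by chunk; the `VertexBuildEvent` shape and the `onVertexBuilt` callback are hypothetical names used only for this illustration, not the exact types in this PR.

```typescript
// Hypothetical, simplified shape of one streamed line; the real payload
// carries the full build details for the vertex.
interface VertexBuildEvent {
  id: string;
  valid: boolean;
  data?: unknown;
}

// NDJSON means "one JSON document per newline-terminated line", so a network
// chunk may end in the middle of a line and must be buffered until the next
// chunk arrives.
function createNdjsonParser(onVertexBuilt: (event: VertexBuildEvent) => void) {
  let buffer = "";
  return (chunk: string) => {
    buffer += chunk;
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep the trailing partial line for later
    for (const line of lines) {
      if (line.trim().length === 0) continue;
      onVertexBuilt(JSON.parse(line) as VertexBuildEvent);
    }
  };
}
```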
All build layers are now calculated server-side only, which improves testability and performance. For each layer, the vertices are built in parallel using asyncio.
The final result is that the build is faster (mainly due to the removed network overhead); from the user's perspective it is sometimes so fast that components seem to be skipped (they stay in the "in-progress" state for only a couple of milliseconds). To improve the user experience, the UI calculates the remaining delta from the actual progress time and keeps each component in the "in-progress" state for at least 300 milliseconds. This preserves the feeling of a progressive build in all cases.
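As a sketch of that minimum-duration idea (the constant name and callbacks below are invented for illustration; the actual store logic differs):

```typescript
const MIN_IN_PROGRESS_MS = 300;

// Keep a component in the "in-progress" state for at least MIN_IN_PROGRESS_MS,
// even when the backend reports it as built almost immediately.
async function markBuiltWithMinimumDelay(
  startedAt: number,     // timestamp (ms) when the vertex entered "in-progress"
  setBuilt: () => void,  // callback that flips the UI to the "built" state
): Promise<void> {
  const elapsed = Date.now() - startedAt;
  const remaining = Math.max(0, MIN_IN_PROGRESS_MS - elapsed);
  if (remaining > 0) {
    await new Promise((resolve) => setTimeout(resolve, remaining));
  }
  setBuilt();
}
```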
Since the frontend now calls a different endpoint, this would be a breaking change (e.g. if you upgrade the frontend first, the backend returns 404 because it doesn't know about the endpoint yet). To overcome that, the old logic is still present and used as a fallback. However, it's recommended to run the same version for the frontend and backend.
One implementation detail is that axios doesn't support streaming responses, so I had to use fetch, which doesn't pass through the common interceptors and could lead to unexpected error handling (however, errors should still be handled correctly).
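For illustration, a rough sketch of what a fetch-based streaming call could look like, with the error handling that the axios interceptors would otherwise provide done explicitly; the endpoint path and callback names are assumptions for this example, and the decoded chunks could be fed into the NDJSON parser sketched earlier:

```typescript
// Stream the build response; aborting the controller closes the connection,
// which lets the backend stop the remaining build tasks.
async function buildFlow(
  flowId: string,
  onChunk: (chunk: string) => void,
  controller: AbortController = new AbortController(),
): Promise<void> {
  const response = await fetch(`/api/v1/build/${flowId}/flow`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    signal: controller.signal,
  });

  // fetch does not go through the axios interceptors, so HTTP errors have to
  // be detected and surfaced by hand here.
  if (!response.ok || response.body === null) {
    throw new Error(`Flow build request failed with status ${response.status}`);
  }

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    onChunk(decoder.decode(value, { stream: true }));
  }
}
```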
Discarded alternatives
EventSource
using HTTP 1.1. To make langflow more predictable, I've discarded this solution.