feat(robot-server,system-server): Return server timing metrics in HTTP responses #17970
Merged: sfoster1 merged 5 commits into chore_release-8.4.0 on Apr 3, 2025
Conversation
Contributor (Author): I'm out until next week. Feel free to merge in my absence.
y3rsh added a commit that referenced this pull request on Apr 9, 2025:
…y-path * chore_release-8.4.0: (30 commits)
- fix(api): make error message clearer for lld issues (#18005)
- chore(locize): sync for translations needed (#18009)
- chore(locize): sync translations (#18009)
- docs(api): API reference entries for liquid class methods (#17887)
- refactor(api): change the names of liquid classes based transfers (#18006)
- feat(app, labware-library): add evotip definition assets (#18007)
- fix(api, shared-data): Flex Stacker engine command optional fields (#17989)
- fix(shared-data): ethanol aspirate position reference (#17991)
- fix(app): Labware setup UI fixes (#17987)
- refactor(app): adjust protocol setup offsets table header (#17985)
- fix(app): do not show post run drop tip prompt if just handled in Error Recovery (#17981)
- fix(app): fix LPC disabled reasons not including fixture mismatch (#17979)
- refactor(app): adjust width on "calibrate now" button (#17978)
- fix(app): fix applying offsets implicitly when navigating on the desktop app (#17967)
- feat(robot-server): Populate `locationSequence` on old runs and make it faster to filter out deleted offsets (#17946)
- feat(app): add inline notification when setting default offsets with a 96ch (#17977)
- fix(app): ER tip selection crashes when trying to get labware def (#17975)
- feat(robot-server,system-server): Return server timing metrics in HTTP responses (#17970)
- fix(app): fix accumulating offsets on run record (#17969)
- fix(app): Fix local state issues when "resetting to default" in LPC (#17965)
- ...
Overview
When investigating server-side performance concerns, it's helpful to isolate how much time is spent actually waiting for the server's Python to chug away, versus how much time is spent in the network, in nginx, or elsewhere.
This adds a FastAPI middleware to attach, to every HTTP response, a header describing how much time our Python spent processing the request. This is based on the standard Server-Timing header, so it automatically shows up in Chrome devtools.
Exactly what it measures is a little nuanced. It's an ASGI middleware, so it ought to include FastAPI overhead. But the FastAPI docs note that dependency teardown runs after the middleware, so 🤷.
Goes towards EXEC-1412.
Test Plan and Hands on Testing
Changelog
- server-utils's dev environment, because something was broken in the old dependencies and I couldn't use a `TestClient`.

Review requests
OK with the vagueness of what's actually being measured?
Risk assessment
Low.