"Monitor" tab for service health metrics #2954
Comments
I love these mockups, and I can think of a couple of things to add after this is implemented. For instance, clicking on a part of a graph could take people to the search results view for the region they clicked. For example, when clicking on the most recent part of the errors graph, the search results for "error=true" would be shown for the time window represented by the area the user clicked. The same goes for latency: return the traces for that time window, sorted by latency with the highest first. In any case, I would recommend taking a look at Kiali to see what they've done and what we could replicate. I particularly love the flame graphs, which should give an idea of how normal the latencies are for a given service/endpoint. The images are from this blog post: Trace my mesh. Make sure to check out parts 2 and 3 too.
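To make the click-through idea above concrete, here is a minimal sketch of mapping a clicked graph region to a trace search. The query parameter names mirror the existing search screen, but they should be treated as assumptions rather than a settled API:

```ts
// Sketch: map a clicked region of the errors graph to a trace-search URL.
// The time window comes from the clicked data point; the parameter names
// mirror the existing search screen but are assumptions, not a settled API.
interface ClickedRegion {
  service: string;
  windowStartMs: number; // start of the clicked bucket, epoch millis
  windowEndMs: number;   // end of the clicked bucket, epoch millis
}

function errorSearchUrl(region: ClickedRegion): string {
  const params = new URLSearchParams({
    service: region.service,
    tags: JSON.stringify({ error: 'true' }),    // only traces flagged as errors
    start: String(region.windowStartMs * 1000), // search expects microseconds
    end: String(region.windowEndMs * 1000),
    limit: '20',
  });
  return `/search?${params.toString()}`;
}
```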
I asked the Kiali folks to chime in, I'm sure they have some experience with this. In any case, I would ask to keep one feature in mind: the ability to embed those graphs by other solutions. I'd argue that a key aspect of the Jaeger UI is the fact that portions of it can be embedded into other applications, such as Kiali and Grafana Tempo.
I would vote for the most conservative solution that pleases the biggest number of use cases. This is a new feature, we don't need to get it 100% right on our first try. I think sorting by the biggest impact would be the best solution.
Re-fetch.
Can't we do an infinite scroll? The UI would then run the same query, for the same time window, just with a different offset.
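A tiny sketch of that infinite-scroll idea, with a hypothetical fetcher standing in for whatever the UI would actually call:

```ts
// Sketch of the infinite-scroll suggestion: re-run the same query over the
// same time window, only bumping the offset. `fetchServiceRows` is a
// hypothetical helper standing in for whatever the UI actually calls.
declare function fetchServiceRows(q: {
  endTs: number;      // end of the (unchanged) time window
  lookbackMs: number; // size of the (unchanged) time window
  offset: number;     // the only thing that changes between pages
  limit: number;
}): Promise<unknown[]>;

const PAGE_SIZE = 50;

async function loadNextPage(offset: number, endTs: number, lookbackMs: number) {
  return fetchServiceRows({ endTs, lookbackMs, offset, limit: PAGE_SIZE });
}
```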
@yurishkuro @albertteoh Could I take the UI tasks on?
@th3M1ke yup, I've added you as an owner of the UI tasks. Thanks for your help!
Hi @albertteoh, is there anything folks here can do to help get this released faster?
@th3M1ke / @yoave23 how are you guys going with the UI side of the Monitor tab? Are there any well-defined/self-contained tasks that @RyanSiu1995 could help with? I'll also need to complete the documentation, but I didn't want to work on that until the UI component is ready, to avoid prematurely communicating the completion of this feature.
Hi folks! We are about to finish and are covering the features with tests. We hope to open a PR by the end of this week.
Thank you! Hoping it can go live soon.
Hi! Any updates on the release of this feature?
@th3M1ke is addressing feedback from jaegertracing/jaeger-ui#815, which should be the last major piece of work for this feature.
May I ask why #2954 has not been released yet?
@tianruyun We're still testing this feature, which has surfaced some bugs/improvements in either the OpenTelemetry Collector (which the Monitor tab depends on for data) or Jaeger:
Those are the last remaining tasks before we can make this feature available in Jaeger. I plan to find some time to work on documentation in parallel with the remaining two Jaeger tasks above. You're most welcome to provide contributions as well. 😄
Really excited for this. Is there a way to give this a try on the latest release already?
@schickling there's a demo you can run locally at: https://github.com/jaegertracing/jaeger/tree/main/docker-compose/monitor.
All tasks are complete, so I'm closing this issue. We can address any feedback/bugs as separate issues/PRs.
Waiting to use this in production! When will it be released?
The next release is planned for 6 April: https://github.com/jaegertracing/jaeger/blob/main/RELEASE.md#release-managers
Hi @albertteoh, it seems the Monitor tab is still not in the new 1.33 release (https://github.com/jaegertracing/jaeger/releases/tag/v1.33.0). What obstacle did we hit?
@FingerLiu the main functionality for the Monitor tab was essentially completed in previous releases, which is why you don't see references to the Monitor tab in the jaeger v1.33.0 release notes; they mainly emphasise changes to the Jaeger backend components. Most of the remaining changes needed for the Monitor tab work to be considered "done" were documentation and other frontend enhancements/bug fixes, and these were both released together as part of the jaeger v1.33.0 release (NB: I've updated those release notes to include the pinned Jaeger UI version).
Proposed sub-tasks
- Jaeger-Query (Owners: @albertteoh)
- Jaeger-UI (Owners: @th3M1ke)
- Documentation (Owners: @albertteoh)
Requirement - what kind of business use case are you trying to solve?
The main proposal is documented in: #2736.
The motivation is to help identify interesting traces (high QPS, slow, or erroneous) without knowing the service or operation up-front.
Use cases include:
Proposal - what do you suggest to solve the problem or improve the existing situation?
Add a new "Monitoring" tab, situated after "Compare", containing service-level request rates, error rates, latency, and impact (= latency * request rate, to avoid "false positives" from low-QPS endpoints with high latencies). The data will be sourced from jaeger-query's new metrics endpoints.
As the jaeger-query metrics endpoints are opt-in, the Monitor tab will have a sensible empty state, perhaps with a link to documentation on how to enable the metrics querying capability.
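A rough sketch of how the tab could source its data and fall back to the empty state when metrics querying is disabled; the endpoint path and query parameters below are assumptions for illustration, not a confirmed jaeger-query API:

```ts
// Sketch: fetch request rates for the Monitor tab, falling back to the empty
// state when metrics querying has not been enabled on jaeger-query.
// The endpoint path and query parameters are assumptions for illustration.
interface MetricsResult {
  enabled: boolean;
  data?: unknown;
}

async function fetchCallRates(services: string[], lookbackMs: number): Promise<MetricsResult> {
  const params = new URLSearchParams({ lookback: String(lookbackMs) });
  services.forEach(s => params.append('service', s));
  const res = await fetch(`/api/metrics/calls?${params}`);
  if (!res.ok) {
    // Metrics querying is opt-in; render the empty state with a docs link instead.
    return { enabled: false };
  }
  return { enabled: true, data: await res.json() };
}
```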
Workflow
The screen will open to a per-service set of metrics, sorted by default on Impact. Columns are configurable by the user, with other latency percentiles available, among others. A search box will be available to filter on service names.
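A minimal sketch of this default view's behaviour, using the impact definition above (latency * request rate); the row shape is assumed for illustration:

```ts
// Sketch of the per-service table: compute impact (= latency * request rate),
// sort descending by it, and filter on the search box text. Row shape assumed.
interface ServiceRow {
  service: string;
  requestRate: number;  // requests per second
  errorRate: number;    // fraction of requests that errored
  p95LatencyMs: number; // 95th percentile latency in milliseconds
}

function impact(row: ServiceRow): number {
  return row.p95LatencyMs * row.requestRate;
}

function visibleRows(rows: ServiceRow[], filter: string): ServiceRow[] {
  return rows
    .filter(r => r.service.toLowerCase().includes(filter.toLowerCase()))
    .sort((a, b) => impact(b) - impact(a)); // highest impact first
}
```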
The user need only supply the time period to fetch metrics for (similar to Find Traces), defaulting to a 1-hour lookback.
Note that the user is not required to define the step size (the period between data points), at least in this iteration, to keep the user experience as simple as possible. Instead, we propose to derive the step size from a sensible heuristic based on the query period and/or the width of the chart, for example (see the sketch below):
- search period < 30m -> 15s step
- search period < 1h -> 1m step, etc.
There are two possible actions from here in this tab:
Service metrics page
If drilling down into the service-level metrics, the page will show a summary of the RED metrics at the top, along with the equivalent per-operation metrics presented like the per-service metrics above. Similarly, there will be a search box to filter on operations, and the user has the option to "View all traces" for a given operation.
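One way the service-level RED summary could be derived from the per-operation rows is a request-weighted roll-up. This is only a sketch with an assumed row shape, and the weighted p95 is an approximation (a true service-level percentile would come from the backend):

```ts
// Sketch: roll per-operation RED metrics up into the service-level summary
// shown at the top of the page. Row shape and request-weighting are assumptions;
// the weighted p95 is only an approximation of a true service-level percentile.
interface OperationRow {
  operation: string;
  requestRate: number;  // requests per second
  errorRate: number;    // fraction of requests that errored
  p95LatencyMs: number;
}

function serviceRedSummary(rows: OperationRow[]) {
  const totalRate = rows.reduce((sum, r) => sum + r.requestRate, 0);
  const weighted = (pick: (r: OperationRow) => number): number =>
    totalRate === 0 ? 0 : rows.reduce((sum, r) => sum + pick(r) * r.requestRate, 0) / totalRate;
  return {
    requestRate: totalRate,                      // Rate
    errorRate: weighted(r => r.errorRate),       // Errors
    p95LatencyMs: weighted(r => r.p95LatencyMs), // Duration (approximate)
  };
}
```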
Search tab
The Search tab will be the final stage in the workflow (unless, of course, the user goes back to a previous state); it is pre-populated with the service and/or operation as well as the search period.
The search period will be sticky between each of these screens to maintain consistency in search results.
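A small sketch of this hand-off into the Search tab, pre-populating the service, optional operation, and the sticky search period; the parameter names follow the existing search screen but are assumptions here:

```ts
// Sketch: build the Search tab URL pre-populated with the selected service,
// optional operation, and the same (sticky) search period used in Monitor.
// The parameter names follow the existing search screen but are assumptions.
function prePopulatedSearchUrl(
  service: string,
  operation: string | undefined,
  startMs: number,
  endMs: number,
): string {
  const params = new URLSearchParams({
    service,
    start: String(startMs * 1000), // search expects microseconds since epoch
    end: String(endMs * 1000),
  });
  if (operation) {
    params.set('operation', operation);
  }
  return `/search?${params.toString()}`;
}
```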
Demo
Courtesy of @Danafrid.
Screen.Recording.2021-04-14.at.11.52.58.mov
Any open questions to address