
WIP: Proposal: Support Timestamped Profiling Events #594

Draft · wants to merge 2 commits into base: main
Conversation

felixge
Member

@felixge felixge commented Oct 31, 2024

🚧 WORK IN PROGRESS 🚧: This proposal is still in the early stages. I'm mostly looking for feedback from active participants in the profiling SIG before soliciting feedback from the wider community.

optional uint32 link_index = 3;

// The indices of the attributes associated with this event.
repeated uint32 attribute_indices = 4;


Attribute value units need to be specified somehow. In the current profiling format there is a sidecar field for that.

Member Author


You mean the AttributeUnit messages? Yeah, we can add that - I don't recall deleting it intentionally.


Yes, that one.

// For the first Event message on the wire, time is the number of clock
// cycles since the start of the associated Clock. For subsequent Event
// messages for the same Clock, the time value is the number of clock cycles
// since the previous message on the wire.
int64 time = 5;


Maybe name this "timestamp" or some other name that makes it clearer that this is a point in time? Just "time" can be read as a duration, a single event, or an aggregated duration.

Member Author


I'm open to alternative naming. My worry with timestamp is that people would miss the delta encoding requirement. But that could happen with the current name as well I guess. I don't feel strongly about it.


Some ideas, with their pros and cons:

  • An explicit name like timestamp_or_delta, time_or_delta.
  • Oneof with timestamp and time_delta fields.
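For reference, the oneof variant might look like this (field names and numbers are hypothetical):

```protobuf
message Event {
  // ...

  // Exactly one of these is set per event: an absolute timestamp resets the
  // reference point, while a delta is relative to the previous event on the
  // wire. sint64 zigzag-encodes negative deltas efficiently.
  oneof time {
    int64 timestamp = 5;
    sint64 time_delta = 6;
  }
}
```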

// The default value for the Event.value field. This is used when the value
// field is the same for all events of this type. For example, CPU samples may
// typically represent the same value of 10ms.
uint64 default_value = 4;


This feels more like weight than "default value" to me.

Member Author


I took the term value from pprof's Sample.value field. weight would also work for me. I don't feel strongly about it.


I see. No strong opinion, we can discuss later if anyone becomes more opinionated.


// Clock describes the clock format used by the EventType. Clocks are expected
// to be monotonic and include the time when a system is suspended.
message Clock {


I like the idea of having the clock type to allow configurable and different resolutions of timestamps in the same or different profiles, but one downside is: does it make the simple and typical case more verbose? E.g. perhaps many collectors might end up just using Unix epoch microseconds with no attributes, but they will have to configure at least one clock.

Member Author


Yeah, it's a tradeoff. I added the clocks for a few different reasons. If I had to put them in order, it would probably be:

  1. Leap Seconds: IMO observability systems should avoid unix epoch timestamps until leap seconds are abolished in 2035. At least I would imagine that they make it significantly less fun to debug incidents that are caused by leap seconds. But unfortunately this is a problem for all other OpenTelemetry signals already.
  2. Data volumes: Changing the resolution could save 1-2 bytes per event under varint encoding.
  3. TSC Clocks: Those can run at frequencies above 1 GHz, so nanosecond resolution is not sufficient to store their values. OTOH, I can think of very few use cases that require sub-nanosecond resolution, especially for the stuff we target with OpenTelemetry.

That being said, the first problem is solvable without configurable clocks. If we decide 2 and 3 are not important enough, we should probably just require timestamps that originate from a non-leap unix epoch with nanosecond precision?


2 and 3 could be solved by having some way of specifying the resolution of the timestamp per event type? E.g. nanos, micros, or millis.


We should probably look at whether all other signals follow some specific approach here. Being extra special is probably not worth it.

uint32 event_type_index = 1;

// The index of the Stack associated with this event.
uint32 stack_index = 2;


It would be really nice if, in addition to the stack, we could also include raw bytes of the event, along with some way to define the format of the Event (maybe in EventType). This goes back to the issue you and I discussed about storing JFR / .NET-like events in this format somehow.

Member Author

@felixge felixge Nov 1, 2024


@beaubelgrave that's worth considering. protobuf's stdlib already has an Any type that can be used to embed raw protobuf messages. Maybe we could do something similar here?

Also: Link to the previous discussion for others reading here: open-telemetry/opentelemetry-specification#3979


Yeah, that Any type would work well, since it gives us a type url we can use to indicate format like we were kind of circling around in the issue discussion. I'd just want to ensure that we can put the payload in this Event object and also have an optional format specified for the payload in the EventType object.

The scenario is we dump raw bytes in the Event objects and label them as "perf_event" or "tracepoint" when we get our data from Linux tracepoints. Then we'd have the tracepoint format data within the EventType struct. That way we could have small payloads in the Event with a larger format description in the EventType that has a 1:N relationship with Event (if I read the patches right).


This will enable the following use cases.

1. Richer analysis of On-CPU as well as Off-CPU data.
Contributor


On- and Off-CPU profiling are just two profiling use cases. How about making it less specific:

  1. Richer analysis of aggregated data, like On-CPU profiling, as well as event based data, like Off-CPU or memory allocation profiling.

Member Author


Richer analysis of aggregated data, like On-CPU profiling

Not sure I follow. The use case enabled by this proposal is event based CPU profiling rather than aggregation. We already have aggregation?

Contributor


The OTel ebpf profiler could aggregate on-CPU profiling data. This is possible by setting Sample.value to a value other than 1 and providing a timestamp in Sample.timestamps_unix_nano for each occurrence.
Currently, this aggregation is not done, as it was rejected during the review process, so it is still low-hanging fruit for optimization.

opentelemetry/proto/profiles/v1development/eventprofile.md (resolved comment)
import "opentelemetry/proto/common/v1/common.proto";

// Profile holds a collection of events that represent a profile.
message Profile {
Contributor

@florianl florianl Nov 4, 2024


Did you consider having a single event_type for a message Profile?
For message Profile with a single event_type processing could look like this:

flowchart TD
    A[Shared Envelope] --> B{" "}
    B --> D[Profile for on-CPU profiling]
    B --> E[Profile for off-CPU profiling]
    B --> F[Profile for memory allocation profiling]

The string table, attribute table and other shared data then should be part of the Shared Envelope (I'm bad at naming things 🙏). This would allow faster handling of profiling data by processors based on event_type. With the suggested approach, processors have to walk all Profile.events to handle a specific event_type.

Member Author


That's an interesting idea. I can see how this could simplify things. I'll think about this a bit more!

{
event_type_index: 1 # thread_state
stack_index: 1 # main;foo;sleep
time: 5_000_000 # 5ms since previous event


What do you think about the stall profiling mentioned in Brendan Gregg's blog on AI flame graphs (https://www.brendangregg.com/blog/2024-10-29/ai-flame-graphs.html)? If this kind of flame graph needs to be supported, there are multiple samples taken in the sleep state, which might cause recording confusion or data inconsistencies.


@beaubelgrave beaubelgrave Nov 5, 2024


IMHO, if we had a duration in addition to time, we could make off_cpu and stall events clearer. I.e.: a stall started at time X and lasted for Y. This gives a start/stop. This could be stored in an attribute, but it feels somewhat wasteful to need an attribute per duration value.

Is the value of the event always considered a duration? Oftentimes I have events where the value is less time-based (like memory allocations). But for off-CPU / stalls, the value is more duration-based.

Member Author


Yeah, sorry I missed that! Definitely covers what I was asking for, thanks!

strings: [
"", # index 0
"allocations", # index 1
"liveheap", # index 2


I think it's necessary to differentiate between virtual memory, physical memory, and situations like page faults. Typically, virtual memory consumed is more than physical memory because some memory regions might be zeroed and never referenced. Some might be 'copy-on-write'.

Member Author

@felixge felixge Nov 11, 2024


Yeah. This needs to be clarified, probably via semantic conventions. But the example given here is allocations profiled by the Go runtime. Those are always measured in terms of virtual memory. I wrote up more details on this here.

{
event_type_index: 0 # cpu_sample (can be omitted on the wire)
stack_index: 2 # main;foo
time: 300_000 # 0.3ms since previous event


Why is time based on the previous event? This implies we have events in perfect time order. Often during collection, especially for off-CPU (cswitch) events, the start of the event is not in order of when it completes. This would force collectors to sort potentially large sets of data (cswitches can be massive).

Can we just have an absolute time option vs the delta time encoding scheme?

Member


I imagine delta encoding is used to reduce payload size

Member Author


The purpose of delta encoding is to reduce payload sizes. time is an int64 right now, so negative values are possible. If we frequently need to deal with unordered events, we could define time as an sint64, which allows encoding negative numbers more efficiently. However, that comes at the expense of encoding positive numbers less efficiently.

Anyway, I don't feel very strongly about delta encoding. I'm planning to do some benchmarks later to provide some evidence (or lack of evidence) that it's going to be beneficial.


64-bit values are already varint encoded, and if they were deltas off of a stable epoch (like the profile start time), they would be quite small without deltas between each other. I get why we would want delta encoding, but it comes at a sorting cost that is likely really bad for off-CPU data sources (since they complete at random times compared to the start time).

Most of the current OpenTelemetry profiling format, including the concept of aggregating stack traces, is inherited from pprof.
However, given that we have [decided](https://github.com/open-telemetry/opentelemetry-proto/issues/567#issuecomment-2286565449) that strict pprof compatibility is not a goal for OpenTelemetry, we are now free to design a more flexible and extensible format that can be used for a wider range of profiling use cases than pprof.

One use case that recently came up is the [collection of Off-CPU profiling data for the ebpf profiler](https://github.com/open-telemetry/opentelemetry-ebpf-profiler/pull/144). The attendees of the [Sep 5](https://docs.google.com/document/d/19UqPPPlGE83N37MhS93uRlxsP1_wGxQ33Qv6CDHaEp0/edit?tab=t.0#heading=h.lenvx4xd62c6) SIG meeting agreed that aggregated profiling data is not ideal for this use case, as it increases the difficulty of reasoning about the collected data and correlating it with application behavior. This is especially true when it comes to the analysis of tail latency problems caused by outliers. So instead of aggregation, it is much more useful to record this data as individual events with timestamps as well as additional context such as thread id, trace id and span id. This becomes even more powerful when On-CPU samples are also recorded with timestamps, as it allows users to identify spikes and stalls of CPU activity as well as Off-CPU exemplars that explain the cause of the stalls and resulting latency (or lack of throughput).
Member


aggregated profiling data is not ideal for this use case

FWIW, I used aggregated off-cpu profiles in the past (offcputime from bcctools) and I found them quite useful for figuring out where applications are waiting for IO.

Member Author


Ack. I'm not trying to suggest that aggregated Off-CPU profiles are useless. But they tend to be dominated by idle threads, so it can be tricky to navigate them. Additionally they can't really be used to analyze tail latencies since all they show are averages.


FWIW, we use a mix of both. Overlaying active CPU over aggregate wait CPU shows where the CPU is trying to execute and getting delayed. From there we allow zooming into individual time sections involved in the aggregate to debug thread interactions, etc. We need both within the same profile to achieve the same thing we have in our private formats.

