
Long Trace Track Event Missing #1017

Closed
lileyang2000 opened this issue Feb 18, 2025 · 1 comment

Comments

@lileyang2000

Hi, I am trying to use the protobuf API to inject some counter events into a Perfetto trace in Python. First I capture some long trace files (roughly 1 GB each) under the same use case, and then I apply protobuf-based track event injection to these captured trace files in Python. Afterwards I found that some of the injected traces appear to lose many of the injected counter track events due to the sorter_push_event_out_of_order error, as shown below (trace_processor_shell.exe version is v49.0):

D:\Users\80397776>D:\Users\80397776\Downloads\windows-amd64\trace_processor_shell.exe -D F:\workspace_test5\perfetto_trace__llcc_1066000_ddr_4224000_policy0_1785600_policy6_2246400_20250110_125056_injected_20250218_164600.pfrace
[473.117] processor_shell.cc:1742 Trace loaded: 958.65 MB in 25.81s (37.1 MB/s)
Error stats for this trace:
name                             idx     source  value
stackprofile_empty_callstack     [NULL]  trace   1349604
sorter_push_event_out_of_order   [NULL]  trace   15792
[473.139] httpd.cc:102 [HTTP] Starting RPC server on localhost:9001
[473.141] httpd.cc:107 [HTTP] This server can be used by reloading https://ui.perfetto.dev and clicking on YES on the "Trace Processor native acceleration" dialog or through the Python API (see https://perfetto.dev/docs/analysis/trace-processor#python-api).

Only after I apply the --full-sort flag to trace_processor_shell.exe does the injected trace turn out fine, no longer missing any track events:

D:\Users\80397776>D:\Users\80397776\Downloads\windows-amd64\trace_processor_shell.exe --full-sort -D F:\workspace_test5\perfetto_trace__llcc_1066000_ddr_4224000_policy0_1785600_policy6_2246400_20250110_125056_injected_20250218_164600.pfrace
[782.253] processor_shell.cc:1742 Trace loaded: 958.65 MB in 32.22s (29.8 MB/s)
Error stats for this trace:
name                             idx     source  value
stackprofile_empty_callstack     [NULL]  trace   1349604
[782.267] httpd.cc:102 [HTTP] Starting RPC server on localhost:9001
[782.269] httpd.cc:107 [HTTP] This server can be used by reloading https://ui.perfetto.dev and clicking on YES on the "Trace Processor native acceleration" dialog or through the Python API (see https://perfetto.dev/docs/analysis/trace-processor#python-api).

I have two questions:

  1. How does the sorter_push_event_out_of_order error happen? Why do some of my traces turn out fine with the same injection method, while others hit the sorter_push_event_out_of_order failure unless I apply the --full-sort flag? And is there a way to work around this error without applying --full-sort?
    By the way, my manually injected track events look like the packets below, following the approach described at https://perfetto.dev/docs/reference/synthetic-track-event (a rough Python sketch of how I append such packets is included after this list):
    packet {
      trusted_packet_sequence_id: 5439
      track_descriptor {
        uuid: 16726494347724583161
        name: "Event 0x1000"
        process {
          pid: 161689
          process_name: "CTG Event 0x1000"
        }
      }
    }
    packet {
      trusted_packet_sequence_id: 5439
      track_descriptor {
        uuid: 10276124455049285176
        name: "0x1000 for cpu 0"
        parent_uuid: 16726494347724583161
        counter {
          type: COUNTER_UNSPECIFIED
          unit: UNIT_COUNT
        }
      }
    }
    packet {
      timestamp: 96548436110720
      trusted_packet_sequence_id: 5439
      track_event {
        type: TYPE_COUNTER
        track_uuid: 10276124455049285176
        name: "0x1000 for cpu 0"
        counter_value: 126728
      }
      timestamp_clock_id: 6
    }
    packet {
      timestamp: 96548452107335
      trusted_packet_sequence_id: 5439
      track_event {
        type: TYPE_COUNTER
        track_uuid: 10276124455049285176
        name: "0x1000 for cpu 0"
        counter_value: 116408
      }
      timestamp_clock_id: 6
    }
  2. Since I would like to use the TraceProcessor Python API to load the injected trace and query the injected track events, and the query result also seems to be missing some track events: is there a way to apply the "--full-sort" flag through the TraceProcessor API, or any other way to work around this?
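
For context, the append step I use can be sketched roughly as follows (an illustrative sketch, not my exact script; it assumes the Perfetto trace protos have been compiled with protoc into a Python module, named perfetto_trace_pb2 here as a placeholder):

# Illustrative sketch of protobuf-based counter injection: build extra
# TracePacket messages and append them to an existing .pftrace file.
# "perfetto_trace_pb2" is a placeholder name for the compiled trace protos.
from perfetto_trace_pb2 import CounterDescriptor, Trace, TrackEvent

SEQ_ID = 5439
PROCESS_TRACK_UUID = 16726494347724583161
COUNTER_TRACK_UUID = 10276124455049285176

def append_counter_packets(trace_path, samples):
    """samples: iterable of (timestamp_ns, counter_value) pairs."""
    trace = Trace()

    # Process track descriptor (same shape as the first packet above).
    proc = trace.packet.add()
    proc.trusted_packet_sequence_id = SEQ_ID
    proc.track_descriptor.uuid = PROCESS_TRACK_UUID
    proc.track_descriptor.name = "Event 0x1000"
    proc.track_descriptor.process.pid = 161689
    proc.track_descriptor.process.process_name = "CTG Event 0x1000"

    # Counter track descriptor parented to the process track.
    ctr = trace.packet.add()
    ctr.trusted_packet_sequence_id = SEQ_ID
    ctr.track_descriptor.uuid = COUNTER_TRACK_UUID
    ctr.track_descriptor.parent_uuid = PROCESS_TRACK_UUID
    ctr.track_descriptor.name = "0x1000 for cpu 0"
    ctr.track_descriptor.counter.unit = CounterDescriptor.UNIT_COUNT

    # One TYPE_COUNTER track event per sample.
    for ts, value in samples:
        pkt = trace.packet.add()
        pkt.timestamp = ts
        pkt.timestamp_clock_id = 6
        pkt.trusted_packet_sequence_id = SEQ_ID
        pkt.track_event.type = TrackEvent.TYPE_COUNTER
        pkt.track_event.track_uuid = COUNTER_TRACK_UUID
        pkt.track_event.name = "0x1000 for cpu 0"
        pkt.track_event.counter_value = value

    # A .pftrace file is a serialized Trace proto (repeated TracePacket),
    # so appending another serialized Trace concatenates the new packets
    # at the end of the file.
    with open(trace_path, "ab") as f:
        f.write(trace.SerializeToString())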

Thank you for your time !

@LalitMaganti
Collaborator

Let me start by saying: what you are doing is undefined behaviour and not officially supported by us. You cannot just concatenate a synthetically generated trace onto a Perfetto trace and expect things to work. It's not supported and never will be. It may happen to work in some cases (e.g. short traces) and you can hack around it with --full-sort, but that's a hack and there are no guarantees that it will not break in the future.

How does the sorter_push_event_out_of_order error happen? Why do some of my traces turn out fine with the same injection method, while others hit the sorter_push_event_out_of_order failure unless I apply the --full-sort flag? And is there a way to work around this error without applying --full-sort?

So in very technical terms: trace processor does incremental windowed sorting of the data in the trace to be more memory conscious. By doing this, we can avoid reading the whole trace into memory in one shot or doing a very expensive O(n log n) global sort. Instead we can do incremental sorting, which is much more CPU and memory efficient. By concatenating events onto the end of the trace, you are breaking this logic: you are emitting packets from close to the beginning of the trace at the end of the trace, and trace processor rejects them.
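
To make the failure mode concrete, here is a toy sketch of windowed sorting (this is not trace processor's actual sorter, just an analogy): once the sorting window has advanced past a timestamp, a packet with an older timestamp arriving later is rejected, which is what the out-of-order counter models below.

# Toy model of incremental windowed sorting (not the real implementation).
# Events are flushed once they fall behind the newest seen timestamp by more
# than the window; anything arriving after its region has been flushed is
# rejected, analogous to the sorter_push_event_out_of_order stat.
def windowed_sort(timestamps, window_ns=1_000_000_000):
    buffered, emitted, dropped = [], [], 0
    flushed_until = float("-inf")
    for ts in timestamps:
        if ts < flushed_until:
            dropped += 1  # arrived behind the flushed watermark
            continue
        buffered.append(ts)
        watermark = max(buffered) - window_ns
        ready = sorted(t for t in buffered if t <= watermark)
        if ready:
            buffered = [t for t in buffered if t > watermark]
            flushed_until = ready[-1]
            emitted.extend(ready)
    emitted.extend(sorted(buffered))
    return emitted, dropped

# A long capture followed by appended packets whose timestamps lie near the
# start of the trace: the appended ones arrive far behind the watermark.
capture = list(range(0, 100_000_000_000, 10_000_000))  # ~100 s of events
appended = [5_000_000_000, 6_000_000_000]              # near the beginning
emitted, dropped = windowed_sort(capture + appended)
print(dropped)  # -> 2: both appended packets are rejected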

--full-sort is a big hack that forces us to ignore all this clever logic and say "just don't bother, sort everything". As stated above, you are relying on very undefined behaviour; just because it happens to work today does not mean it will not break in the future.

The real solution for this is #1018. That will be the only official way to analyse/visualize multiple "streams" of Perfetto trace data in a single session.

Since I would like to use the TraceProcessor Python API to load the injected trace and query the injected track events, and the query result also seems to be missing some track events: is there a way to apply the "--full-sort" flag through the TraceProcessor API, or any other way to work around this?

You can use the extra_args field of the TraceProcessorConfig and pass ["--full-sort"] there.
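
For example (a sketch based on the suggestion above; the exact keyword argument name can vary between versions of the perfetto Python package, so check the TraceProcessorConfig signature in your installed version, and the trace path and query below are placeholders):

# Forward --full-sort to trace_processor via the config, then query the
# injected counter track. The keyword name follows the comment above and
# may differ in your installed package version.
from perfetto.trace_processor import TraceProcessor, TraceProcessorConfig

config = TraceProcessorConfig(extra_args=["--full-sort"])
tp = TraceProcessor(trace="injected_trace.pftrace", config=config)
try:
    rows = tp.query(
        "SELECT t.name, c.ts, c.value "
        "FROM counter c JOIN counter_track t ON c.track_id = t.id "
        "WHERE t.name GLOB '0x1000*'")
    for row in rows:
        print(row.name, row.ts, row.value)
finally:
    tp.close()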
