
Expose kernel ringbuffer errors in metrics #2839

Merged
lambdanis merged 2 commits into main from pr/lambdanis/missed-metric on Aug 27, 2024

Conversation

lambdanis
Contributor

@lambdanis lambdanis commented Aug 24, 2024

There are two metrics counting events lost in the ringbuffer:

  • tetragon_missed_events_total, collected in BPF based on perf_event_output
    error
  • tetragon_ringbuf_perf_event_lost_total, collected in observer based on
    Record.LostSamples from cilium/ebpf

When testing, I saw the former being higher than the latter. This might mean
there are failed writes to the ringbuffer that are not lost events seen by
Record.LostSamples. To investigate such issues, I'm adding an error label to
tetragon_missed_events_total, representing the kernel error returned by
perf_event_output.

According to @olsajiri, EBUSY and ENOSPC would be the only ones we could really
hit; the rest is most likely due to a config error. So let's start with counting
these two and aggregating other errors as "unknown". We can always split out
more errors in the future if needed.
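
As a rough, hypothetical sketch (not the PR's actual code): the SENT_FAILED_*
constants below are the ones added in this PR, while the helper name, the
userspace errno.h include, and the usage comment are illustrative assumptions
about how a failed perf_event_output() return value could be bucketed per error.

#include <errno.h>

#define SENT_FAILED_UNKNOWN 0 // unknown error
#define SENT_FAILED_EBUSY   1 // EBUSY
#define SENT_FAILED_ENOSPC  2 // ENOSPC
#define SENT_FAILED_MAX     3

// Map a (negative) perf_event_output() return value to a counter bucket.
static inline int sent_failed_bucket(long err)
{
	switch (-err) {
	case EBUSY:
		return SENT_FAILED_EBUSY;
	case ENOSPC:
		return SENT_FAILED_ENOSPC;
	default:
		return SENT_FAILED_UNKNOWN;
	}
}

// Usage sketch (map and variable names are assumptions, not Tetragon's):
//   long err = perf_event_output(ctx, &events_map, BPF_F_CURRENT_CPU, data, size);
//   if (err < 0)
//           stats->sent_failed[op][sent_failed_bucket(err)]++;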

@lambdanis lambdanis added area/metrics Related to prometheus metrics release-note/minor This PR introduces a minor user-visible change labels Aug 24, 2024

@lambdanis lambdanis force-pushed the pr/lambdanis/missed-metric branch from 607c0bd to c270528 on August 24, 2024 09:47
@lambdanis lambdanis marked this pull request as ready for review August 24, 2024 09:47
@lambdanis lambdanis requested review from mtardy and a team as code owners August 24, 2024 09:47
There are a few places where events can be "missed", so let's make it clear in
the metric name that it's counting kernel misses.

Signed-off-by: Anna Kapuscinska <[email protected]>
Contributor

@olsajiri olsajiri left a comment

lgtm

#define SENT_FAILED_UNKNOWN 0 // unknown error
#define SENT_FAILED_EBUSY 1 // EBUSY
#define SENT_FAILED_ENOSPC 2 // ENOSPC
#define SENT_FAILED_MAX 3
Contributor

nit, extra tab screwing the diff in terminal

struct kernel_stats {
-	__u64 sent_failed[256];
+	__u64 sent_failed[256][SENT_FAILED_MAX];
};
Contributor

Just a note.. I think it's ok because it's still a small map, but it is per-CPU and this change multiplies the memory usage by 3.
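
For scale: with 8-byte __u64 counters, the array goes from 256 × 8 B = 2 KiB to 256 × 3 × 8 B = 6 KiB per CPU, so the total stays small even on machines with many CPUs.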

@olsajiri olsajiri requested a review from tpapagian August 26, 2024 10:25
@olsajiri
Contributor

adding @tpapagian because I think he added those metrics originally

@lambdanis lambdanis merged commit 17c5346 into main Aug 27, 2024
51 checks passed
@lambdanis lambdanis deleted the pr/lambdanis/missed-metric branch August 27, 2024 08:27
Labels
area/metrics Related to prometheus metrics release-note/minor This PR introduces a minor user-visible change
3 participants