NULLs in tetragon data #193
Comments
I use KinD and haven't encountered this problem, but I suspect it originates in the following code. The podInfo is queried in tetragon/pkg/process/process.go (Line 166 in 77e32bf),
and the node name is queried in tetragon/pkg/reader/node/node.go (Line 13 in 77e32bf).
Both lookups happen on the Go side.
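For illustration only, here is a rough, self-contained Go sketch of how lookups like these can end up producing empty fields. The names Pod, podStore, and FindPodByContainerID are made up for the example and are not the actual tetragon API; the real code at the permalinks above may differ:

```go
package main

import (
	"fmt"
	"os"
)

// nodeName is resolved once at startup, similar in spirit to what
// pkg/reader/node/node.go appears to do. If neither the environment
// variable nor the hostname is available, it stays empty, and every
// exported event would then carry an empty node_name.
var nodeName = func() string {
	if n := os.Getenv("NODE_NAME"); n != "" {
		return n
	}
	n, _ := os.Hostname()
	return n
}()

// Pod is a hypothetical stand-in for the pod metadata attached to events.
type Pod struct {
	Namespace string
	Name      string
}

// podStore is a hypothetical stand-in for the Kubernetes watcher cache
// that pkg/process/process.go consults when it fills in pod info.
type podStore interface {
	FindPodByContainerID(containerID string) (*Pod, bool)
}

// podInfo returns pod metadata for a container, or nil when the lookup
// fails (cache not yet synced, container already gone, and so on).
// A nil result here is the kind of thing that would surface as null
// pod fields in the exported JSON.
func podInfo(store podStore, containerID string) *Pod {
	if pod, ok := store.FindPodByContainerID(containerID); ok {
		return pod
	}
	return nil
}

func main() {
	fmt.Println("node_name:", nodeName)
}
```

If the behavior only starts after a few hours, the interesting case is probably the cache lookup failing later on rather than the one-time node name resolution, but that is just a guess from the sketch above.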
By the way, the log is hard to read as posted, so here is a formatted version:
One of the more severe occurrences of this behavior (observed on k3s) has only timestamp data intact, looking like this:
Thank you for the report! Could you please add the logs from the tetragon container?
You're welcome! Here is a tetragon log sample from a k3s cluster that encountered this behavior:
Thanks!
So it seems that we need to:
When tetragon is installed via helm as instructed, some fields are populated with null strings (example in the data below) instead of the expected data:
It isn't just events from the tetragon container that have these nulls, as in the example; trace data from other containers has also been affected.
The behavior doesn't appear immediately after tetragon is installed; it seems to take some time to start, perhaps a few hours or a day, and then it continues to occur.
I have observed this behavior across multiple builds of k3s and microk8s clusters, but only when tetragon is deployed. We can also see related data in the microk8s kubelite daemon log:
The behavior does not occur when tetragon is not installed or running in the clusters.
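As a rough way to quantify how many exported events are affected, a small helper along these lines can be fed the JSON export on stdin; which field to check and how the export is retrieved are assumptions that may need adjusting for a given install:

```go
// Hypothetical diagnostic helper, not part of tetragon: pipe the JSON export
// into it and it reports how many events have an empty or null node_name.
// The same check can be extended to the pod fields under process_exec, etc.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	var total, degraded int
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // export lines can be long
	for sc.Scan() {
		var ev map[string]json.RawMessage
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip lines that are not JSON events
		}
		total++
		raw, ok := ev["node_name"]
		if !ok || string(raw) == "null" || string(raw) == `""` {
			degraded++
		}
	}
	fmt.Printf("%d of %d events had an empty node_name\n", degraded, total)
}
```

With a default helm install the export sidecar is, I believe, called export-stdout, so something like `kubectl logs -n kube-system ds/tetragon -c export-stdout | go run main.go` should feed it, but adjust the namespace and container name for your setup.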
My tests so far have been with both k3s and microk8s clusters.
Cluster nodes in the tests have been running openSUSE Leap 15.4, Ubuntu 22 Server, and Ubuntu 22 Desktop (kernels 5.15.0-39-generic and 5.14.21-150400.22-default at the moment).
I currently have a microk8s cluster running where this is happening.
Let me know if there is any other data that would be helpful!