Uncompress_Block_LZO() error decompression failed in nffile.c #398
Comments
Hmm .. are you sure your files have been written correctly? I would suspect that you were possibly short of disk space and the files could not be written completely, which results in corrupt files.
Data is not a problem: /dev/nvme0n1 7.0T 1.7T 4.9T 26% /data. I was not completely clear in my ticket. I have been using 1.7.0.1 for a few weeks now, and with the data captured with it I get those errors, although the results are still printed. Since a few hours ago I have been running 1.7.1 and I don't see the errors anymore. I will monitor this for a longer period to see what happens. Another odd thing I see: whenever I restart my nfcapd processes, the data is not correct. For the first 10 minutes it shows 474598.4 Tb/s (all data) for my stream, and after 10 minutes it normalises to the real throughput of 22.8 Gb/s (all data). The flow count is correct from the start; I use a 1/100 sampling rate.
Moreover, I just tested some queries on the live profile and on a separate profile (malicious-inra2). The errors only occur in the malicious-inra2 profile and not in the live profile. But again, I have not seen this problem in the last couple of hours, so I will monitor it for longer to see if it comes back.
The nffile code has not changed since September last year, so there is nothing there I would suspect. nffile is one of the very central parts and is used in every run. Moreover, if it occurs only on some profiles, that does not make any sense to me. As for restarting nfcapd: sampling may be involved, and your exporter sends the sampling information only at periodic intervals.
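As an illustration only, here is a minimal sketch of why counters can look wrong until the exporter's periodic sampler option record arrives. This is not nfdump's actual code; the struct and helper names (exporter_state_t, scaled_bytes) are hypothetical. The idea is simply that the collector has no sampling rate to apply until the exporter announces it, so whatever default it uses right after a restart can mis-scale the reported traffic.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-exporter state: the sampling rate is only known after
 * the exporter has sent its (periodic) sampler option record. */
typedef struct {
    uint32_t sampling_rate;   /* 0 = not announced yet */
} exporter_state_t;

/* Hypothetical helper: scale raw counters by the sampling rate,
 * falling back to 1:1 while the rate is still unknown. */
static uint64_t scaled_bytes(const exporter_state_t *ex, uint64_t raw_bytes) {
    uint32_t rate = ex->sampling_rate ? ex->sampling_rate : 1;
    return raw_bytes * (uint64_t)rate;
}

int main(void) {
    exporter_state_t ex = { .sampling_rate = 0 };       /* just after a restart */
    printf("before option record: %llu bytes\n",
           (unsigned long long)scaled_bytes(&ex, 1500));
    ex.sampling_rate = 100;                             /* 1/100 sampling announced */
    printf("after option record:  %llu bytes\n",
           (unsigned long long)scaled_bytes(&ex, 1500));
    return 0;
}
```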
I don't see the decompression errors anymore; ticket closed.
Hi there, I'm having the same problem with many files. Take a look:
And there's a lot of free space on disk:
I have other profiles, but it's not occurring with all of them. The nfdump version is … Is there something I can do?
Could you please send me, by email to the address in the AUTHORS file: from profile live, file nfcapd.202304240640. Thanks!
Unfortunately I have erased the AKAMAI profile, but I have another with the same problem. Take a look:
I sent you files from the live and facebook profiles, together with the filter used!
The last commit solved the problem! I have been running it for 2 days and I don't see any errors :D Thank you so much!
The only affected binary was nfprofile. nfcapd always collected correctly. |
I'm getting lots of these errors with nfdump version 1.7.1 when I run a query like this:
nfdump -M /data/nfsen/profiles-data/malicious-inra2/in:out -T -R 2023/01/08/nfcapd.202301081500:2023/01/09/nfcapd.202301090625 -n 10 -s ip/flows
Uncompress_Block_LZO() error decompression failed in nffile.c line 212: LZO error: -6
Any advice on how to get rid of these errors?
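For context, here is a minimal sketch of how such an error surfaces, assuming the minilzo library bundled with nfdump. This is not the actual nffile.c code, and uncompress_block() is a made-up wrapper: lzo1x_decompress_safe() returns a negative status when a compressed block cannot be decoded, and the reader can only report it. In minilzo, -6 is LZO_E_LOOKBEHIND_OVERRUN, which usually points to corrupt or truncated compressed data rather than a decompressor bug.

```c
#include <stdio.h>
#include "minilzo.h"   /* assumes minilzo, the LZO implementation bundled with nfdump */

/* Hypothetical wrapper: decompress one data block and report LZO failures.
 * lzo_init() is expected to have been called once at program start. */
static int uncompress_block(const unsigned char *in, lzo_uint in_len,
                            unsigned char *out, lzo_uint out_size) {
    lzo_uint out_len = out_size;
    int rc = lzo1x_decompress_safe(in, in_len, out, &out_len, NULL);
    if (rc != LZO_E_OK) {
        /* Negative codes such as -6 (LZO_E_LOOKBEHIND_OVERRUN) indicate that
         * the compressed block itself is damaged, e.g. a partially written
         * or otherwise corrupt file. */
        fprintf(stderr, "decompression failed, LZO error: %d\n", rc);
        return -1;
    }
    return (int)out_len;   /* number of bytes actually decompressed */
}
```

If that reading is right, the error is a symptom of damaged data blocks on disk, which matches the maintainer's earlier suggestion about incompletely written files.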