
management of large log files. #142

Closed
davidvossel opened this issue Feb 26, 2015 · 9 comments

Comments

@davidvossel
Contributor

For pacemaker and corosync, when debug/trace logging levels are in use the log files get huge. On long running systems, it is possible for the log files to actually fill the disk.

An idea we have to help with this is to split logfiles after they reach a certain size and compress the archived files. I'm unsure how feasible this feature request is. It needs research.

@fabbione
Member

On 2/26/2015 4:31 PM, David Vossel wrote:

For pacemaker and corosync, when debug/trace logging levels are in use
the log files get huge. On long running systems, it is possible for the
log files to actually fill the disk.

An idea we have to help with this is to split logfiles after they reach
a certain size and compress the archived files. I'm unsure how feasible
this feature request is. It needs research.

This sounds like a logrotate issue to me. The logs get huge on demand (admin
triggered), and hence the admin should take care of them (IMHO).

The problem with moving and renaming/compressing logs is that it can
potentially conflict with logrotate's processing of the logs.

Splitting the log file is easy. Compressing it, while easy code-wise,
still requires a separate thread to do it (we don't want to block for a
long time), etc. etc., and that makes it all very interesting in the long
run if something crashes while compressing...

Fabio
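
A minimal sketch of the off-thread compression Fabio describes, in plain C with zlib and POSIX threads; the function names are hypothetical and nothing here is existing libqb code:

```c
/* Sketch only: compress a rotated log on a worker thread so the
 * logging path never blocks on I/O. Assumes zlib and POSIX threads;
 * build with -lz -lpthread. Nothing here is existing libqb code. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <zlib.h>

static void *compress_worker(void *arg)
{
    char *path = arg;                    /* e.g. "/var/log/app.1.log" */
    char gzpath[4096];
    FILE *in;
    gzFile out;

    snprintf(gzpath, sizeof(gzpath), "%s.gz", path);
    in = fopen(path, "rb");
    out = gzopen(gzpath, "wb");

    if (in != NULL && out != NULL) {
        char buf[8192];
        size_t n;

        while ((n = fread(buf, 1, sizeof(buf), in)) > 0) {
            gzwrite(out, buf, (unsigned int)n);
        }
        /* Remove the original only after the .gz is safely closed, so
         * a crash mid-compress leaves the uncompressed file behind
         * rather than losing data. */
        if (gzclose(out) == Z_OK) {
            unlink(path);
        }
        out = NULL;
    }
    if (in != NULL) {
        fclose(in);
    }
    if (out != NULL) {
        gzclose(out);
    }
    free(path);
    return NULL;
}

/* Hand the archived file name to a detached worker thread. */
static void compress_in_background(const char *rotated_path)
{
    pthread_t tid;
    char *copy = strdup(rotated_path);

    if (copy != NULL &&
        pthread_create(&tid, NULL, compress_worker, copy) == 0) {
        pthread_detach(tid);
    } else {
        free(copy);
    }
}
```

Deleting the plain file only after a clean gzclose() addresses the crash-while-compressing worry: the worst case is a leftover uncompressed file, never a lost one.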



@davidvossel
Contributor Author

@fabbione yep, I agree about the compression. That is a job for logrotate. How about a libqb feature that guarantees the logfile will never grow beyond X bytes? Log truncation.
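
Taking "log truncating" literally, a minimal sketch in plain POSIX C; capped_log_write is a hypothetical name, not a libqb call, and it simply empties the file once the cap is reached:

```c
/* Sketch only, not a libqb API: cap a log file at max_bytes by
 * truncating it back to empty once the limit is reached. */
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

static void capped_log_write(FILE *fp, off_t max_bytes, const char *line)
{
    struct stat st;

    if (fstat(fileno(fp), &st) == 0 && st.st_size >= max_bytes) {
        /* Drop the old contents; this is exactly the "log entries
         * dropped by size" question Fabio raises in the next comment. */
        if (ftruncate(fileno(fp), 0) == 0) {
            rewind(fp);
        }
    }
    fputs(line, fp);
    fflush(fp);
}
```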

@fabbione
Member

On 2/26/2015 4:51 PM, David Vossel wrote:

@fabbione https://github.com/fabbione yep, I agree about the
compression. That is a job for logrotate. How about a libqb feature that
guarantees the logfile will never grow beyond X bytes? Log truncation.

Hmmm, how would you deal with log entries dropped by size?

I think the best way would be a new log API that could potentially do:

  • log_open(/var/log/app%ROTATE_NUMBER%.log, max_size)

once the log file >= max_size, then move to %ROTATE_NUMBER%+1.log

Being a new function/feature, logrotate snippets can be adapted to deal
with the new log names.

logrotate by default adds a suffix to .log, so the two numbering schemes
wouldn't stomp on each other.

It's also possible to consider moving files around with the same
sequence numbering as logrotate, but that might be confusing.
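
A rough C sketch of the API shape Fabio outlines, with a printf-style %d standing in for %ROTATE_NUMBER%; the struct and function names are hypothetical, not existing libqb interfaces:

```c
/* Sketch of the proposed rotate-by-size API. Hypothetical names;
 * nothing here exists in libqb today. */
#include <stdio.h>
#include <sys/stat.h>

struct rotating_log {
    const char *pattern;  /* e.g. "/var/log/app%d.log" (%ROTATE_NUMBER%) */
    long max_size;        /* rotate once the file reaches this size */
    int seq;              /* current rotate number */
    FILE *fp;
};

static int rotating_log_open(struct rotating_log *rl)
{
    char path[4096];

    snprintf(path, sizeof(path), rl->pattern, rl->seq);
    rl->fp = fopen(path, "a");
    return (rl->fp != NULL) ? 0 : -1;
}

static void rotating_log_write(struct rotating_log *rl, const char *line)
{
    struct stat st;

    if (fstat(fileno(rl->fp), &st) == 0 && st.st_size >= rl->max_size) {
        /* Full: close the current file and move to %ROTATE_NUMBER%+1.
         * The finished file keeps its name, so a logrotate snippet
         * matching the pattern can compress or prune it later. */
        fclose(rl->fp);
        rl->seq++;
        if (rotating_log_open(rl) != 0) {
            return;
        }
    }
    fputs(line, rl->fp);
    fflush(rl->fp);
}
```

A caller would initialize something like struct rotating_log rl = { "/var/log/app%d.log", 10 * 1024 * 1024, 0, NULL } and call rotating_log_open(&rl); no single file then grows past max_size by more than one entry.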



@al45tair

If the log file gets too large, it's also possible for corosync to fail to start, alleging a configuration failure, with the message

parse error in config: Can't open logfile '/var/log/cluster/corosync.log' for reason: Value too large for defined data type (75).

This is probably best addressed in corosync itself (I'm just about to file an issue there), but it seems relevant to this issue too. (Errno 75 is EOVERFLOW, which here typically means a process built with a 32-bit off_t tried to open a log file that had grown past 2 GiB.)

@chrissie-c
Contributor

Thanks for mentioning it. I'm not sure we can do much about that here (aside from the management of large files, which is what this issue is really about). If the open call returns an error that we can't deal with, then passing it back to the caller seems like sensible behaviour to me.

@kgaillot
Contributor

This can be solved on the application's side by configuring logrotate appropriately (an illustrative snippet follows the list):

  • use maxsize in the logrotate config
  • optionally use dateformat -%Y%m%d-%H in the logrotate config (not needed with newer versions of logrotate, but older versions will fail to rotate with "destination ... already exists, skipping rotation" if the log has already been rotated the same day)
  • optionally set logrotate to run more frequently (e.g. move /etc/cron.daily/logrotate to /etc/cron.hourly/logrotate), since logrotate will not notice the file size until its next run
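
For concreteness, an illustrative logrotate stanza combining the first two points; the path, size, and rotation count are placeholders, and the third point is a cron change rather than a config directive:

```
/var/log/cluster/corosync.log {
    maxsize 100M
    rotate 7
    compress
    missingok
    notifempty
    # dateext is required for dateformat to take effect; the hour field
    # keeps same-day rotations from colliding on older logrotate versions.
    dateext
    dateformat -%Y%m%d-%H
    # copytruncate, because the daemon keeps its file descriptor open
    # and (at the time of this discussion) had no reopen call.
    copytruncate
}
```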

It might still be useful to implement the equivalent in libqb to be able to rotate sooner than logrotate's next run, but that seems a small benefit for a lot of work, so I'd be inclined to just close this.

@chrissie-c
Contributor

I'll add an API call to re-open the log; that's pretty easy to do and allows applications to do their own log management (it'll have the option to open a new filename). If you don't need it, don't use it :)
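
A hedged sketch of how a daemon might drive such a reopen call from logrotate's postrotate via a signal. This assumes the API landed with a shape like int32_t qb_log_file_reopen(int32_t target, const char *filename), as in later libqb releases; verify against the qb/qblog.h shipped with your version:

```c
/* Sketch: reopen the libqb file target when logrotate (or an admin)
 * sends SIGHUP. Assumes qb_log_file_reopen() as described above;
 * check qb/qblog.h for the exact signature in your release. */
#include <signal.h>
#include <stdint.h>
#include <qb/qblog.h>

static volatile sig_atomic_t reopen_requested;
static int32_t log_target;          /* saved from qb_log_file_open() */

static void on_sighup(int sig)
{
    (void)sig;
    reopen_requested = 1;           /* real work happens outside the handler */
}

/* Call this from the daemon's main loop. */
static void maybe_reopen_log(void)
{
    if (reopen_requested) {
        reopen_requested = 0;
        /* NULL filename: reopen the same path, picking up the fresh
         * file logrotate moved into place; pass a name instead to
         * switch to a new file, per the option mentioned above. */
        qb_log_file_reopen(log_target, NULL);
    }
}
```

With this in place, a logrotate stanza could use postrotate with kill -HUP instead of copytruncate.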

@kgaillot
Contributor

Definitely a good idea, but that solves issue #239 rather than this :)

@chrissie-c
Contributor

Good point. I'll close this one then as it doesn't seem to be generally regarded as a good idea. Thanks.
