Backups fill up partition #220
Same issue on both fallback units (yes, I have 3 Pi-holes). The backup files just fill up the filesystem.
* Add the addition of an Environment Path variable in Crontab (#212)
* Add detection of missing path components and addition of path to crontab
* Fix bug where \n is inserted literally
* Fix `find` command invoke (Issue #220) (#223)
* 3.4.5

Co-authored-by: Michael Thompson <[email protected]>
Co-authored-by: benjaminfd <[email protected]>
Co-authored-by: Michael Stanclift <[email protected]>
Sorry, adding onto the pile. My Raspberry Pi 4, running only Homebridge and Pi-hole (with Gravity Sync), stopped responding sometime in the past week or two. I logged in to find the root directory at 100%. Some searching revealed that Gravity Sync's logs and backups had filled up the partition on a 64 GB SD card (after clearing past months, only 3.7 GB is used now). Is there a way to automatically clear out old backups and logs to prevent this?
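In the meantime, here is roughly what I used to triage and clear space by hand. The backup path and the `.backup` suffix are assumptions based on a default root install, so adjust them to your own layout:

```bash
# Show which directories are using the most space on the root filesystem
sudo du -xh --max-depth=2 / 2>/dev/null | sort -h | tail -n 20

# Delete Gravity Sync backups older than 3 days (path and suffix are assumptions)
sudo find /root/gravity-sync/backup -name '*.backup' -mtime +3 -print -delete
```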
Version 3.4.5 includes the fix for the backup files not being removed. Make sure you are on the latest release.
Excellent. Just ran the update command. Very quick!
Just noticed this problem today. I had 26 GB of backups taking up space on my poor secondary Pi-hole. Thanks for the fix =)
Still the same issue for me. |
A manual pull did clean up the backups just now.
Having the same issue. Running the latest version, 3.4.5.
If you manually run the backup, does the job complete without errors?
If I run it manually, all seems to work fine, including the cleanup of the backups. I see that the output of a manual run vs. a cron job is totally different... MANUAL: CRON JOB:
Please update to 3.4.6 and see if the issues persist.
Same issue here. Currently on 3.4.7, and cron job execution does not purge backups... running the sync manually does purge all backups. I have BACKUP_RETAIN set to 0 in the gravity-sync.conf file.
@abjoseph can you post the output of
@vmstan Update: It seems to be working now, in the sense that the backups are purged (with BACKUP_RETAIN=0) after having run the ./gravity-sync.sh script without any arguments and having it do a sync. Previously, when I initially updated gravity-sync and did a reboot, it did not purge the backups on any of the subsequent cron executions. P.S. I'm not sure if setting BACKUP_RETAIN=0 just prevents it from creating any new backups, as opposed to purging any existing backups and then creating one new one. Maybe I'll try testing BACKUP_RETAIN=1, let that run for a while, then change BACKUP_RETAIN=0, reboot, and see if the script picks up the new value and begins purging the backups on its own without me running the gravity-sync.sh script manually.
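For reference, this is how I'm checking the value and the folder between runs (paths match the default /root/gravity-sync install shown in the info output below; adjust if yours differs):

```bash
# Confirm the retention value the script will read
grep BACKUP_RETAIN /root/gravity-sync/gravity-sync.conf

# Check the backup folder size and file count before and after a cron run
du -sh /root/gravity-sync/backup
ls -1 /root/gravity-sync/backup | wc -l
```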
Also, the output of ./gravity-sync.sh info (FYI: the Pi-hole instance is running in an LXC container within Proxmox, as privileged):

root@pihole-b:~# ./gravity-sync/gravity-sync.sh info
[∞] Initalizing Gravity Sync (3.4.7)
[✓] Loading gravity-sync.conf
[✓] Evaluating arguments: INFO
========================================================
Local Software Versions
Gravity Sync 3.4.7
Pi-hole
Pi-hole version is v5.3.1 (Latest: v5.4)
AdminLTE version is v5.5.1 (Latest: v5.6)
FTL version is v5.8.1 (Latest: v5.9)
Linux 5.11.22-3-pve x86_64
bash 5.0.17(1)-release
OpenSSH_8.2p1 Ubuntu-4ubuntu0.2, OpenSSL 1.1.1f 31 Mar 2020
rsync version 3.1.3 protocol version 31
sqlite3 3.31.1 2020-01-27 19:55:54 3bfa9cc97da10598521b342961df8f5f68c7388fa117345eeb516eaa837balt1
Sudo version 1.8.31
git version 2.25.1
Local/Secondary Instance Settings
Local Hostname: pihole-b
Local Pi-hole Type: default
Local Pi-hole Config Directory: /etc/pihole
Local DNSMASQ Config Directory: /etc/dnsmasq.d
Local Gravity Sync Directory: /root/gravity-sync
Local Pi-hole Binary Directory: /usr/local/bin/pihole
Local File Owner Settings: pihole:pihole
DNS Replication: Enabled (default)
CNAME Replication: Enabled (custom)
Verify Operations: Disabled (custom)
Ping Test: Enabled (default)
Backup Retention: 0 days (custom)
Backup Folder Size: 277K
Remote/Primary Instance Settings
Remote Hostname/IP: 192.168.1.11
Remote Username: root
Remote Pi-hole Type: default
Remote Pi-hole Config Directory: /etc/pihole
Remote DNSMASQ Config Directory: /etc/dnsmasq.d
Remote Pi-hole Binary Directory: /usr/local/bin/pihole
Remote File Owner Settings: pihole:pihole
========================================================
[∞] Gravity Sync INFO aborted after 0 seconds
Backups are still cluttering the disk; it doesn't matter whether I set BACKUP_RETAIN to '1' or '0'.
Still experiencing the same thing. Just had to delete over 10 backups from last night after my weekly backup ran. BACKUP_RETAIN is set to '0'. It is syncing my adlists along with my blacklists and whitelists.
Same issue here. (I thought it was solved.)
On a related note: what are the possibilities for a setting to put those backups in another location, like an attached HDD? Most of us use Pi-hole and Gravity Sync on Raspberry Pis, most likely with SD cards, which don't take kindly to massive rewrites like these backups.
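In the meantime, a bind mount should work as a general Linux workaround, without Gravity Sync knowing anything about it. A sketch only, assuming a root install at /root/gravity-sync and an HDD mounted at /mnt/hdd (both assumptions):

```bash
# Run as root: move the existing backups onto the HDD, then bind-mount the
# directory back into place so Gravity Sync keeps writing to the same path
mkdir -p /mnt/hdd/gravity-sync-backup
rsync -a /root/gravity-sync/backup/ /mnt/hdd/gravity-sync-backup/
mount --bind /mnt/hdd/gravity-sync-backup /root/gravity-sync/backup

# Optional: make the bind mount persistent across reboots
echo '/mnt/hdd/gravity-sync-backup /root/gravity-sync/backup none bind 0 0' >> /etc/fstab
```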
This is still an issue. I believe I've found the problem in the includes/gs-backup.sh file, line 17.
@litebright it might be worthwhile to create a pull request. This was introduced in 3.4.6 here
I wish I had an answer for everyone on this, but it seems to consistently be cron not properly removing the files while a manual sync job does, and every other function of the script runs correctly via cron. It's also not triggering for me on my systems.
Anyone impacted by this, can you post the output of
And I'm on
Alright, I figured that was a long shot; it's the same as what I'm testing on.
Would it be worthwhile to be a bit hacky and create a separate function for manual backups, appending _manual to the end or similar? That way, the manual and automated backups would (1) be separate and easily identifiable, and (2) not be affected by one another when pruning occurs.
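Something like this is what I have in mind. It's purely a hypothetical sketch, not the script's current behavior; the paths, the helper name, and the suffix are all made up for illustration:

```bash
# Hypothetical naming scheme only -- not how gravity-sync works today
backup_manual() {
  local backup_dir="/root/gravity-sync/backup"   # assumed default location
  local stamp
  stamp=$(date +%Y%m%d-%H%M%S)
  cp /etc/pihole/gravity.db "${backup_dir}/${stamp}-gravity.db_manual.backup"
}

# Automated pruning could then skip anything tagged _manual
find /root/gravity-sync/backup -name '*.backup' ! -name '*_manual.backup' -mtime +3 -delete
```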
Hi @vmstan
Thanks!
I have
find (GNU findutils) 4.8.0
I have the same issue, and it looks like the old issue #193 is somewhat similar, because I only get the "[✗] Reloading secondary FTLDNS services" error when the backups are not deleted. As a workaround to keep my disk from filling up, I added a crontab entry to delete backups that are older than 59 minutes, because my Pi-hole stopped working several times due to 100% disk usage ;-).
UPDATE:
--> Hoping that this can be fixed in the near future.
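A crontab entry along those lines might look like this (the backup directory and the `.backup` suffix are assumptions for a default root install; adjust to your layout):

```bash
# crontab -e (as root): once an hour, delete backups older than 59 minutes
0 * * * * find /root/gravity-sync/backup -name '*.backup' -mmin +59 -delete
```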
Quick update from my side: this seems to be fixed with version 3.4.8.
Interested to see if anyone else who updates gets this corrected as well.
I just had to delete 80 backups, 20 GB worth, all from today. Version 3.4.8.
Running the automate command only gives the option for how often to run and then exits. Could this be part of the problem, or is this normal in the newer versions?
The problem appears to have been resolved with version 3.4.8; files are purged. However, the last line in /gravity-sync/logs/gravity-sync.cron indicates "Gravity Sync PULL aborted after 1 seconds". My BACKUP_RETAIN='0' and I am running on Debian 11 (Bullseye).
If your last cron job didn't detect a change to sync, it still says aborted. I should probably change it to something like "has nothing to do" to be more clear.
How big is your database (one backup file)?
Roughly 250 MB per file.
Sad to report the issue does not appear to be resolved with 3.4.8. Twenty-three hours after my previous post and a review of the backup directory, backups are noted every 30 minutes, and my total is 3.0 GB. For your review and analysis, backups started at 0130 CST and my crontab -e is as follows: Other cron jobs (below) do not conflict with gravity-sync but are provided for reference. A review of the gravity-sync.cron file reveals the following; note there are no "purging" entries even though my BACKUP_RETAIN='0'.
Temporary solution... The last two days produced the exact same results as previously posted, starting at 0130. Here is how I got around the issue to prevent the backups from saving, given my setting of BACKUP_RETAIN='0'. I will continue to test and report back when this issue is resolved. I removed the content from crontab -e (tmp crontab) and pasted it into /etc/crontab... /etc/crontab
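One formatting detail for anyone copying this: /etc/crontab takes an extra user field, so the same job has to be written slightly differently there. A sketch, assuming the default smart-sync job and the gravity-sync.cron log location mentioned above (check your own entry for the exact command and schedule):

```bash
# Per-user crontab (crontab -e) -- no user field
*/30 * * * * /root/gravity-sync/gravity-sync.sh smart > /root/gravity-sync/logs/gravity-sync.cron 2>&1

# /etc/crontab -- note the extra "root" user field before the command
*/30 * * * * root /root/gravity-sync/gravity-sync.sh smart > /root/gravity-sync/logs/gravity-sync.cron 2>&1
```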
Noticed that line 19 in gs-backup.sh is commented out.
Correct me if I'm wrong, but I think there has been a misunderstanding of pull/push (myself included); it would be better described as a forced pull/push. Pull/push performs a backup first, then rsyncs to Pi-hole; no change check is performed. This means changing the cron from smart to pull for one-way syncing causes (assuming it runs every 15 minutes with 3-day retention) 96 backups a day and 288 backups after 3 days. At that point gravity-sync would start deleting old backups. Only smart sync checks for changes before performing the backup and rsync. The task_backup function in gs-backup.sh isn't used during sync, but only when manually running "gravity-sync backup", so "# backup_cleanup" would have no effect.
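To put the retention math and the cleanup in one place, the behavior being described boils down to roughly this. It is only a sketch of the expected logic, not the project's actual backup_cleanup implementation, and the path and suffix are assumptions:

```bash
# Sketch of the expected retention behavior: with a 15-minute pull and 3-day
# retention, that's 4 * 24 = 96 backups a day, so roughly 288 sit on disk
# before anything becomes old enough to be deleted.
BACKUP_RETAIN=3
find /root/gravity-sync/backup -name '*.backup' -mtime +"${BACKUP_RETAIN}" -delete
```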
Hi, I've been running Pi-hole on Debian 11, running Gravity Sync v3.4.8. Like the first post, my
I re-ran it. I also have weekly backups for the current month, so I can pull a known working configuration from 2-3 weeks back if there's anything on it needed to help figure out what went wrong. I'm going to delete all the files in /backup/ and then monitor this VM to see if it prunes properly.
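If it's useful to anyone doing the same monitoring, a simple cron entry can log the backup folder size over time so you can tell whether pruning ever kicks in (paths assumed for a default root install; adjust to yours):

```bash
# crontab -e: once an hour, append a timestamp and the backup folder size to a log
0 * * * * date >> /root/gravity-sync/logs/backup-size.log && du -sh /root/gravity-sync/backup >> /root/gravity-sync/logs/backup-size.log
```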
The push and pull operations are "semi-smart" already. They will look and see if there are changes between the components, and only do something if they detect any. However, if any changes are detected, they will attempt to move all three managed components in the direction indicated.
I'm hitting this issue as well on 4.0.4, running Fedora 36 in Podman. Backup cleanup was disabled in backup.sh, but enabling it made no difference. I disabled the job last night in crontab -e as @rtc2022 mentions above, and the backups stopped being created overnight. Unfortunately, moving the job from crontab -e to /etc/crontab has the same effect as disabling it in crontab -e, and the backups won't run. For now, I've re-enabled the crontab -e sync job and added an additional manual cleanup job to /etc/crontab, as @SecurityWho mentioned above, until this is resolved. I did modify his to keep the 5 newest files and delete all the rest, instead of clearing every hour (as my database doesn't change very often). Change the variable under | head -n -X | to the number of newest backups you'd like to keep. Also, it seems the link you posted above is dead or incorrect - https://github.com/vmstan/gravity-sync/discussions/295
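For anyone wanting the same keep-the-newest-N stop-gap, the cleanup command might look roughly like this. The directory and `.backup` suffix are assumptions (and will likely differ in a container setup); `tail -n +6` keeps the 5 newest files, so change the offset to N+1 for N files:

```bash
# Delete everything in the backup folder except the 5 newest files
ls -1t /root/gravity-sync/backup/*.backup | tail -n +6 | xargs -r rm --
```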
@Zanathoz that isn’t how 4.x works. I would suggest removing your existing install, including any crontab related to Gravity Sync, and reinstalling. |
Hi,
Today I found my RPi that runs gravity-sync showing heavy disk activity.
I checked the processes and found the sqlite backup task at the top, running for 48+ hours or so.
Went to check the free space on the partition and found it 100% filled up.
About 6 GB of the 16 GB partition were backups from gravity-sync.
I'm not sure how much space I should provision for the backup folder, but I think 6 GB is a little bit too much.
It looks like this issue started after I updated to the most recent version of gravity-sync at the beginning of June.
As far as I understand, the backups rotate at 3 days by default.
Is it correct that every sync, which runs every 30 minutes, is supposed to trigger a backup?
That's a lot of traffic and a lot of space...
I deleted the backups and after a couple of minutes the sqlite process finished.
See the content of my backup directory:
(sorry for quoting, but the code tags have no line feed)