Describe the bug
I have noticed that on one of my targets, prunes have not been happening for a few weeks, even though backups were completing fine. After checking the logs, the prune does indeed fail with "repo already locked, waiting up to 0s for the lock", "exit status 11", and "Error: 1 errors were found". However, this did not trigger the Healthchecks.io URL that I have configured in the failure hook. I am not sure whether prune errors are excluded from the failure hook by design or whether this is actually a bug.
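For reference, the relevant part of my .autorestic.yaml looks roughly like the sketch below. This is simplified from memory: the Healthchecks.io UUIDs and the failure URL are placeholders, backup options and excludes are trimmed, and only the hook keys (before / success / failure) and the forget setting that matter for this report are shown.

locations:
  limited:
    from: /mnt/source
    to: target-limited
    forget: prune   # forget/prune runs right after the backup, as seen in the log
    hooks:
      before:
        - curl -m 10 --retry 5 https://hc-ping.com/XXX/start
      success:
        - curl -m 10 --retry 5 https://hc-ping.com/XXX/0
      failure:
        - curl -m 10 --retry 5 https://hc-ping.com/XXX/fail   # placeholder failure ping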
Expected behavior
If the forget / prune fails after a successful backup, the failure hook should be executed, not the success hook.
Environment
OS: Synology DSM 7
Version: 1.8.2
Install: Docker
Additional context
Log file:
Using config: /config/.autorestic.yaml
Using env: /config/.autorestic.env
Using lock: /config/.autorestic.lock.yml
Backing up location "limited"
Running hooks
> curl -m 10 --retry 5 https://hc-ping.com/XXX/start
> Executing: /bin/bash -c curl -m 10 --retry 5 https://hc-ping.com/XXX/start
OK
Backend: target-limited
> Executing: /usr/bin/restic backup --limit-upload 3000 --exclude */#recycle --exclude */.Trash-1000 --exclude */Thumbs.db --exclude */@eaDir --tag ar:location:limited /mnt/source
using parent snapshot 12689a9a
"Files: 316 new, 919 changed, 744156 unmodified""Dirs: 140 new, 613 changed, 137152 unmodified"
Added to the repository: 2.600 GiB (2.317 GiB stored)
"processed 745391 files, 3.339 TiB in 21:24"
snapshot 530b2b2c saved
Running hooks
Running hooks
> curl -m 10 --retry 5 https://hc-ping.com/XXX/0
> Executing: /bin/bash -c curl -m 10 --retry 5 https://hc-ping.com/XXX/0
OK
Forgetting for location "limited"
For backend "target-limited"> Executing: /usr/bin/restic forget --tag ar:location:limited --prune --limit-upload 3000 --keep-daily 7 --keep-monthly 3 --keep-weekly 4
"repo already locked, waiting up to 0s for the lock"exit status 11
Backing up location "full"
Running hooks
> curl -m 10 --retry 5 https://hc-ping.com/YYY/start
> Executing: /bin/bash -c curl -m 10 --retry 5 https://hc-ping.com/YYY/start
OK
Backend: target-full
> Executing: /usr/bin/restic backup --limit-upload 3000 --exclude */#recycle --exclude */.Trash-1000 --exclude */Thumbs.db --exclude */@eaDir --tag ar:location:full /mnt/source /mnt/source-additional
using parent snapshot b3567c30
"Files: 383 new, 936 changed, 760519 unmodified""Dirs: 173 new, 665 changed, 137066 unmodified"
Added to the repository: 3.022 GiB (2.688 GiB stored)
"processed 761838 files, 3.593 TiB in 25:48"
snapshot 78c7d254 saved
Running hooks
Running hooks
> curl -m 10 --retry 5 https://hc-ping.com/YYY/0
> Executing: /bin/bash -c curl -m 10 --retry 5 https://hc-ping.com/YYY/0
OK
Forgetting for location "full"
For backend "target-full"> Executing: /usr/bin/restic forget --tag ar:location:full --prune --limit-upload 3000 --keep-daily 7 --keep-yearly 4 --keep-monthly 12 --keep-weekly 4
"Applying Policy: keep 7 daily, 4 weekly, 12 monthly, 4 yearly snapshots"
keep 13 snapshots:
# snapshot list removed for readability
----------------------------------------------------------------------------------------------------------------------------
13 snapshots
remove 1 snapshots:
ID        Time                 Host               Tags              Paths
-----------------------------------------------------------------------------------------------
5ddf80d3  2024-07-30 00:05:03  autorestic-server  ar:location:full  /mnt/source
                                                                    /mnt/source-additional
-----------------------------------------------------------------------------------------------
1 snapshots
[0:00] 100.00% 1 / 1 files deleted
"1 snapshots have been removed, running prune"
loading indexes...
loading all snapshots...
finding data that is still in use for 13 snapshots
[3:50] 100.00% 13 / 13 snapshots
searching used packs...
collecting packs for deletion and repacking
[5:33] 100.00% 281263 / 281263 packs processed
to repack: 2400 blobs / 29.088 MiB
this removes: 399 blobs / 2.327 MiB
to delete: 0 blobs / 0 B
total prune: 399 blobs / 2.327 MiB
remaining: 4732944 blobs / 4.554 TiB
unused size after prune: 233.094 GiB (5.00% of remaining size)
repacking packs
[0:17] 100.00% 3 / 3 packs repacked
rebuilding index
[0:08] 100.00% 36 / 36 indexes processed
[0:00] 100.00% 6 / 6 old indexes deleted
removing 3 old packs
[0:00] 100.00% 3 / 3 files deleted
done
Done
Done
Error: 1 errors were found