Our company has over 20 TB of data, and the Garbage Collection task now fails with an error #18701
Comments
Any logs?
Could you check the jobservice dashboard to see whether the GC job is running?
GARBAGE_COLLECTION: pending count 2, latency 117hrs 54min 0sec
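The dashboard numbers above come from Harbor's jobservice queue status (recent Harbor 2.x versions expose it via `GET /api/v2.0/jobservice/queues`). A minimal sketch of turning such a response into the readable lines shown above; the field names (`job_type`, `count`, `latency` in seconds) are assumptions, so check them against your Harbor version's API response:

```python
# Sketch: render Harbor jobservice queue status as human-readable lines.
# Field names ("job_type", "count", "latency") are assumptions about the
# /api/v2.0/jobservice/queues response shape; verify against your version.
def summarize_queues(queues):
    """Format each queue entry like the jobservice dashboard does."""
    lines = []
    for q in queues:
        hours, rem = divmod(q["latency"], 3600)  # latency assumed in seconds
        minutes, seconds = divmod(rem, 60)
        lines.append(
            f'{q["job_type"]}: pending {q["count"]}, '
            f'latency {hours}hrs {minutes}min {seconds}sec'
        )
    return lines

# Example resembling the numbers reported in this thread
sample = [{"job_type": "GARBAGE_COLLECTION", "count": 2,
           "latency": 117 * 3600 + 54 * 60}]
print(summarize_queues(sample)[0])
```

A latency of nearly five days on the GC queue, as reported here, is the signal that a stuck job is blocking the queue rather than the jobs simply being slow.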
The errored GC job is still running in a goroutine; it blocks other GC jobs from running until it completes.
Deleting images or blobs with third-party tools is not recommended; it can cause a discrepancy between the database and the file system.
You could try 2.9.0; with feature #18855, the total GC time can be shortened.
Our company's Harbor registry has been in use for 7-8 years, and we have gradually upgraded it to version 2.7. We have cleaned up data several times in the past, but recently the Garbage Collection task shows "error" after running for about 2 days. I suspect the issue is the large data volume and a timeout, since it has not been cleaned in a while. Is there any solution that can help us clean up the data regularly? We currently have 20 TB of data, growing by approximately 1-2 TB per month. The error message is as follows:
{"errors":[{"code":"NOT_FOUND","message":"{\"code\":10010,\"message\":\"object is not found\",\"details\":\"42e453cd73dc854255e30b54\"}"}]}
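On the "clean up regularly" question: Harbor can run GC on a cron schedule instead of on demand, which keeps the backlog from accumulating for months. A minimal sketch of the schedule payload, assuming the standard `/api/v2.0/system/gc/schedule` endpoint and Harbor's 6-field cron format (seconds first); verify both against your version's API docs:

```python
import json

# Sketch: build a recurring GC schedule payload for Harbor.
# Harbor cron strings have 6 fields (seconds first); "Custom" is the
# schedule type for user-defined cron expressions. Verify the exact
# format against your Harbor version before applying.
def weekly_gc_schedule(cron="0 0 2 * * 6"):  # 02:00 every Saturday
    return {
        "schedule": {"type": "Custom", "cron": cron},
        "parameters": {"delete_untagged": True},  # also reclaim untagged artifacts
    }

print(json.dumps(weekly_gc_schedule()))
```

Pairing a schedule like this with tag retention policies (so artifacts are actually deleted and become GC candidates) is what keeps a 1-2 TB/month growth rate manageable.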