This seems to have started when I updated to 29.0.7 (from 28) a week ago.
I have some 100,000 files in Nextcloud, and the fileid in oc_filecache was at roughly 350,000 (the installation is about 5 years old).
Now, one week later (without adding a serious number of files), the fileid is past 411,500. That is roughly 61,500 new fileids in a week, or about 6 per minute, so these two files must be removed and recreated about 6 times a minute!
I have a script that checks for orphaned files in my S3 storage, and it found hundreds of orphaned objects with exactly the sizes of the discovery and capabilities files. It looks like these two files are removed and recreated at an alarming rate, which cripples the S3 storage structure.
Wouldn't the problem be solved if the file were overwritten in place instead of being removed and recreated?
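For reference, the orphan check I run is essentially the following. This is a minimal sketch, assuming S3 is configured as primary storage, where Nextcloud names objects `urn:oid:<fileid>`; the function name `find_orphans` and the example ids are my own, not anything from Nextcloud:

```python
import re

def find_orphans(s3_keys, cache_fileids):
    """Return S3 object keys whose fileid no longer exists in oc_filecache.

    s3_keys: iterable of object keys listed from the bucket
    cache_fileids: iterable of fileid values from `SELECT fileid FROM oc_filecache`
    """
    known = set(int(f) for f in cache_fileids)
    orphans = []
    for key in s3_keys:
        # Primary-storage objects are keyed "urn:oid:<fileid>".
        m = re.fullmatch(r"urn:oid:(\d+)", key)
        if m and int(m.group(1)) not in known:
            orphans.append(key)
    return orphans

# Example with made-up ids: 350001 is no longer in oc_filecache.
print(find_orphans(["urn:oid:350000", "urn:oid:350001", "urn:oid:350002"],
                   [350000, 350002]))
# → ['urn:oid:350001']
```

In my case the keys come from a bucket listing and the fileids from a database dump, but the comparison is the same.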