I have very large files (20-100GB) with somewhat slow IO. I would love to see lazy md5 evaluation. It can take tens of minutes just to calculate the hashes before uploading starts. Combined with multipart segment uploads, uploads would be so much faster.
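To illustrate the idea being requested, here is a minimal sketch (not the tool's actual code) of hashing each part as it is read, so the MD5 work overlaps with the multipart upload instead of requiring a separate full pass over the file first. The part size and the `upload_part` call are hypothetical placeholders:

```python
import hashlib

PART_SIZE = 64 * 1024 * 1024  # hypothetical 64 MiB part size


def iter_parts_with_md5(path, part_size=PART_SIZE):
    """Yield (part_number, data, md5_hex) one part at a time, so each
    part's hash is computed right before that part is uploaded rather
    than in an up-front pass over the whole file."""
    with open(path, "rb") as f:
        part_number = 1
        while True:
            data = f.read(part_size)
            if not data:
                break
            yield part_number, data, hashlib.md5(data).hexdigest()
            part_number += 1


# Usage sketch; upload_part stands in for whatever multipart API is used:
# for num, data, md5_hex in iter_parts_with_md5("/data/huge.bin"):
#     upload_part(num, data, content_md5=md5_hex)
```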
I understand your issue perfectly.
Sadly, at the moment, calculating the MD5 of a file is the only way to know whether the file is unchanged for a "sync".
That said, on my todo list I have a feature that would let you use filesystem metadata to avoid recalculating the MD5 when nothing has changed.
Note that in the current version you can already disable MD5 calculation and rely only on the "file size" and "last modified date" metadata, which is faster.
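As a rough sketch of that planned metadata-based approach (this is not the project's implementation), one could cache each file's MD5 keyed by its size and mtime and only rehash when either value changes. The cache file name is a made-up example:

```python
import hashlib
import json
import os

CACHE_FILE = ".md5-cache.json"  # hypothetical cache location


def load_cache():
    """Load the previously saved size/mtime/md5 entries, if any."""
    try:
        with open(CACHE_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}


def save_cache(cache):
    """Persist the cache so the next run can skip unchanged files."""
    with open(CACHE_FILE, "w") as f:
        json.dump(cache, f)


def cached_md5(path, cache):
    """Return the file's MD5, recomputing it only when the (size, mtime)
    pair no longer matches the cached entry for that path."""
    st = os.stat(path)
    entry = cache.get(path)
    if entry and entry["size"] == st.st_size and entry["mtime"] == st.st_mtime:
        return entry["md5"]
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8 * 1024 * 1024), b""):
            h.update(chunk)
    cache[path] = {"size": st.st_size, "mtime": st.st_mtime, "md5": h.hexdigest()}
    return cache[path]["md5"]
```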