I'm attempting to create a backup solution. It needs to be capable of incremental copies using `s3cmd sync` and must preserve file permissions. (For now I can ignore the fact that directory permissions aren't preserved.) I'm writing from Linux systems into different "folders" within a single S3 backup bucket.
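For reference, the invocation I'm attempting looks roughly like this (the source path and bucket name are placeholders, not my real configuration):

```shell
# Placeholder source path and bucket/prefix; --preserve (the default for
# `sync`) stores file mode, ownership, and timestamps as object metadata.
SRC=/home/user/data/
DEST=s3://example-backup-bucket/user1/
# Echoed here rather than executed, since it needs real credentials:
echo s3cmd sync --preserve --recursive "$SRC" "$DEST"
```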
Unfortunately, in all my testing the `s3cmd sync` command copies every single file every time, even when the file has not changed in any way. This is not what I expected.
The `--skip-existing` switch skips all existing files, even if they have changed. (Oddly, the debug output shows that s3cmd does all of the slow MD5 checks first and then ignores the results.)
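My understanding (an assumption on my part, not confirmed from the s3cmd source) is that `sync` decides whether to re-upload by comparing the local file's size and MD5 against the remote object's ETag, which only equals the MD5 for plain non-multipart uploads. A local sketch of that comparison:

```shell
# Sketch of the size/MD5 check s3cmd appears to perform (assumption);
# the "remote" ETag is hard-coded here purely for illustration.
printf 'hello' > demo.txt
local_md5=$(md5sum demo.txt | cut -d' ' -f1)
remote_etag="5d41402abc4b2a76b9719d911017c592"  # MD5 of the content "hello"
if [ "$local_md5" = "$remote_etag" ]; then
    echo "match: sync should skip this file"
else
    echo "mismatch: sync should re-upload"
fi
```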
How can I use `s3cmd` to get an rsync-like sync from Linux to an S3 bucket, please?
EDIT: I'm currently using s3cmd version 2.2.0 from the Ubuntu repos, but will update if there are relevant bug fixes. Thanks!
I'm unable to diagnose this, but I suspect that our S3 bucket configuration could be the cause.
A single bucket is set up with a "folder" for each user to back up into, using a method similar to the one in this AWS blog.
The bucket has versioning enabled, but for security reasons it does not allow users to delete or modify objects. That way, if a user account is compromised, the attacker can't remove or encrypt the existing backups.
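The deny rules look something like the following hypothetical sketch; the bucket name, account ID, and user are placeholders, not our actual configuration:

```shell
# Write a sample deny policy (placeholders throughout) and sanity-check it.
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDeletes",
      "Effect": "Deny",
      "Principal": {"AWS": "arn:aws:iam::123456789012:user/backup-user"},
      "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
      "Resource": "arn:aws:s3:::example-backup-bucket/*"
    }
  ]
}
EOF
# Validate the JSON before attaching it with `aws s3api put-bucket-policy`.
python3 -m json.tool policy.json > /dev/null && echo "policy.json is valid JSON"
```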
Will `s3cmd sync` work if it cannot overwrite or delete pre-existing files in a bucket?
I can confirm that the problem I see seems related to being unable to delete files. I set up a basic bucket with versioning enabled but otherwise default settings, and with this test bucket incremental updates with `s3cmd sync` appear to work reliably.