RuntimeError: dictionary changed size during iteration #945

Closed
LocutusOfBorg opened this issue Jan 2, 2018 · 13 comments

@LocutusOfBorg
Contributor

Hello, I'm forwarding this from bugs.debian.org/885914:

Package: s3cmd
Version: 2.0.1-1
Severity: grave
Justification: renders package unusable

Dear Maintainer,

s3cmd put is failing with:

    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
        An unexpected error has occurred.
      Please try reproducing the error using
      the latest s3cmd code from the git master
      branch found at:
        https://github.com/s3tools/s3cmd
      and have a look at the known issues list:
        https://github.com/s3tools/s3cmd/wiki/Common-known-issues-and-their-solutions
      If the error persists, please report the
      following lines (removing any private
      info as necessary) to:
      [email protected]
   
   
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
   
    Invoked as: /usr/bin/s3cmd --ssl --config=/tmp/tmp.akNQcIMsdc --acl-private put --cache-file=«path»/.s3cmd.cache --encrypt «files...» s3://«bucket»/
    Problem: <class 'RuntimeError'>: dictionary changed size during iteration
    S3cmd:  2.0.1
    python:  3.6.4 (default, Dec 19 2017, 14:09:48)
    [GCC 7.2.0]
    environment LANG=en_GB.UTF-8

    Traceback (most recent call last):
      File "/usr/bin/s3cmd", line 3073, in <module>
        rc = main()
      File "/usr/bin/s3cmd", line 2989, in main
        rc = cmd_func(args)
      File "/usr/bin/s3cmd", line 364, in cmd_object_put
        local_list, single_file_local, exclude_list, total_size_local = fetch_local_list(args, is_src = True)
      File "/usr/lib/python3/dist-packages/S3/FileLists.py", line 353, in fetch_local_list
        _maintain_cache(cache, local_list)
      File "/usr/lib/python3/dist-packages/S3/FileLists.py", line 311, in _maintain_cache
        cache.purge()
      File "/usr/lib/python3/dist-packages/S3/HashCache.py", line 49, in purge
        for i in self.inodes[d].keys():
    RuntimeError: dictionary changed size during iteration

/tmp/tmp.akNQcIMsdc contains:
    [default]
    access_key = «access key»
    acl_public = False
    bucket_location = EU
    cloudfront_host = cloudfront.amazonaws.com
    cloudfront_resource = /2008-06-30/distribution
    default_mime_type = binary/octet-stream
    delete_removed = False
    dry_run = False
    encoding = UTF-8
    encrypt = False
    force = False
    get_continue = False
    gpg_command = /usr/bin/gpg
    gpg_decrypt = %(gpg_command)s --homedir /usr/local/backups/gpghome -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
    gpg_encrypt = %(gpg_command)s --homedir /usr/local/backups/gpghome --cipher-algo AES256 -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
    gpg_passphrase = «passphrase»
    guess_mime_type = True
    host_base = s3.amazonaws.com
    host_bucket = %(bucket)s.s3.amazonaws.com
    human_readable_sizes = False
    list_md5 = False
    preserve_attrs = True
    progress_meter = True
    proxy_host =
    proxy_port = 0
    recursive = False
    recv_chunk = 4096
    secret_key = «secret key»
    send_chunk = 4096
    simpledb_host = sdb.amazonaws.com
    skip_existing = False
    use_https = True
    verbosity = WARNING
    socket_timeout=60

Downgrading to 1.6.1-1 from Stretch without changing any other packages works ok.

Thanks,
Ian.

-- System Information:
Debian Release: buster/sid
  APT prefers testing
  APT policy: (990, 'testing'), (500, 'unstable'), (500, 'stable'), (1, 'experimental')
Architecture: amd64 (x86_64)
Foreign Architectures: i386, armhf, armel, arm64

Kernel: Linux 4.13.0-1-amd64 (SMP w/4 CPU cores)
Locale: LANG=en_GB.UTF-8, LC_CTYPE=en_GB.UTF-8 (charmap=UTF-8), LANGUAGE=en_GB.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)

Versions of packages s3cmd depends on:
ii  python3          3.6.4~rc1-2
ii  python3-dateutil  2.6.1-1
ii  python3-magic    1:5.32-1

s3cmd recommends no packages.

s3cmd suggests no packages.

-- no debconf information
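
For context, the error in the traceback above is a Python 3 behaviour: dict.keys() returns a live view, so deleting entries while iterating over it fails (Python 2 returned a list copy, which is why older setups never hit this). A minimal standalone sketch, with made-up data rather than the actual s3cmd code, that reproduces the same error:

    # Standalone sketch (hypothetical data, not HashCache.py): deleting from a
    # dict while iterating over its live .keys() view raises the error seen above.
    inodes = {"dev-1": {101: "fileA", 102: "fileB"}}

    try:
        for d in inodes:
            for i in inodes[d].keys():   # live view over the inner dict
                del inodes[d][i]         # mutates the dict mid-iteration
    except RuntimeError as e:
        print(e)                         # "dictionary changed size during iteration"
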
@fviard fviard self-assigned this Jan 2, 2018
@LocutusOfBorg
Contributor Author

Hello, do you have any news, please?

@fviard
Contributor

fviard commented Jan 18, 2018

@LocutusOfBorg sorry for the lack of updates. I'm a little behind lately because one particularly hard issue slowed me down for some time, but I can see the origin of this one and I hope to have it fixed, along with some others, by this weekend.

@LocutusOfBorg
Contributor Author

Thanks a lot! Don't worry!

@vwbusguy

vwbusguy commented Feb 20, 2018

I'm seeing the same thing now on Fedora 27. It looks like the initial sync worked fine, but subsequent syncs with the same file cache generate this error.

$ rpm -q s3cmd
s3cmd-2.0.1-1.fc27.noarch
$ s3cmd --version
s3cmd version 2.0.1

@vwbusguy

One workaround might be to do a deepcopy of the resulting dictionary. My guess is that, since this is data backed up from a live server, some file was touched while the cached data was being verified. Making a copy of the dictionary data should hopefully prevent this fs/iterator race condition.
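
A rough sketch of that idea: iterate over a copied snapshot so that deletions on the live dict cannot invalidate the iterator. The cache layout and the marked set below are assumptions for illustration, not the real HashCache structure:

    import copy

    # Illustrative only: walk a deep-copied snapshot, delete from the live dict.
    # 'inodes' and 'marked' are hypothetical stand-ins for the cache internals.
    def purge_marked(inodes, marked):
        snapshot = copy.deepcopy(inodes)
        for dev in snapshot:
            for inode in snapshot[dev]:
                if (dev, inode) in marked:
                    del inodes[dev][inode]

A shallow snapshot of the keys at each level (list()/tuple()) would be enough here; deepcopy is just the heavier-handed version of the same idea.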

@tomchiverton

Same error here, FC27 too.

I'm backing up static files. Nothing changed in the tree.

@tomchiverton

The workaround was to remove the file pointed to by --cache-file.

2nd and 3rd syncs (after the cache is rebuilt) seem fine.

@tomchiverton

It didn't stay fixed. I guess s3cmd sync is a goner then, given that the AWS CLI has this feature now (minus the cache).

@lcrea

lcrea commented Oct 30, 2018

Same problem on macOS too.
Temporarily solved it with the same workaround suggested by @tomchiverton: removing the --cache-file parameter.

@NotTheDr01ds

NotTheDr01ds commented Jun 24, 2019

It seems to me that the problem is a Python 3 behavior which is documented in this Stack Overflow post, along with the solution.

It works reliably for me after changing HashCache.py's purge function so that, for example:

for d in self.inodes.keys(): becomes for d in tuple(self.inodes):

and likewise on each subsequent "for" loop in purge(). I would guess that the mark_all_for_purge function would have the same issue.

I tested on Python 3, but not 2.

I'll try to get a pull request in with the changes, and some more testing.
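
Sketched out, the change looks roughly like the following. This is not the merged patch: the cache layout and the purge condition are placeholders, and only the tuple(...) snapshots in the loop headers are the point:

    # Sketch of the snapshot-before-iterating fix (placeholder internals,
    # not the actual HashCache.py contents).
    class HashCacheSketch(object):
        def __init__(self):
            self.inodes = {}        # dev -> inode -> cached entry (assumed layout)

        def purge(self):
            # before: for d in self.inodes.keys():
            for d in tuple(self.inodes):              # snapshot of the outer keys
                # before: for i in self.inodes[d].keys():   (line 49 in the traceback)
                for i in tuple(self.inodes[d]):       # snapshot of the inner keys
                    if self.inodes[d][i] is None:     # placeholder purge condition
                        del self.inodes[d][i]

list(...) works just as well as tuple(...); Python 2 never triggered this because its .keys() already returned a list copy.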

@Altycoder

I also have this problem on Arch LTS. I've tried both the s3cmd in the Arch repos, which is version 2.0.2-2, and the latest version via pip, which is 2.0.2.

I have just switched from Debian Buster to Arch LTS (for other reasons), and s3cmd via pip was working just fine on Debian (if I recall correctly, Debian supports both Python 2 and 3).

Arch is currently at Python 3.7.4.

@virtusense-trisha

Doing the tuple() change from NotTheDr01ds worked for me; I just made that change in HashCache.py where the backtrace showed it crashing.

It's really gross, and pip will overwrite it the next time I update, but it worked. I have 13,000+ files I want to sync, so it takes a few minutes if I can't use the cache file.

NotTheDr01ds pushed a commit to NotTheDr01ds/s3cmd that referenced this issue Jan 30, 2020
fviard pushed a commit that referenced this issue Mar 22, 2020
@fviard
Contributor

fviard commented Mar 22, 2020

Fix merged, thanks to @NotTheDr01ds!

@fviard fviard closed this as completed Mar 22, 2020