This is likely related to #238 and #200: some recent `torch` wheels are >= 2.5 GB, and `pip` appears to download them repeatedly without hitting the cache. My only SWAG so far is that this is because the body itself overflows msgpack's signed 32-bit limit on binary objects, per the spec.
Looked some more into this: the person who reported this said that torch was serving 2 GB+ wheels, but I don't see any that large on the release page: https://pypi.org/project/torch/#files
That being said, I suspect this is still causing unnecessary cache misses due to #200: we end up storing large downloads (such as 700 MB `torch` wheels) that never get "hit", since the default msgpack load behavior is to limit binary bodies to ~100 MB: https://msgpack-python.readthedocs.io/en/latest/api.html
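A scaled-down sketch of the hypothesized failure mode: if a binary object in the payload exceeds the unpacker's `max_bin_len`, deserialization fails outright, which would look like a cache miss. The 1 MiB cap below is illustrative only, not the actual default:

```python
import msgpack

# 2 MiB of zeros as a stand-in for a large cached wheel body.
body = b"\x00" * (2 * 1024 * 1024)
payload = msgpack.dumps(body)

# With an explicit (small) max_bin_len, unpacking is rejected -- the same
# failure mode hypothesized above, just scaled down from the ~100 MB case.
try:
    msgpack.loads(payload, max_bin_len=1024 * 1024)
except ValueError as e:
    print("rejected:", e)
```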
Hmm, I've still been unable to trigger this: it looks like `msgpack.loads(payload)` sets its maximum limits based on `len(payload)`, so we should never really hit a binary object limit in practice.
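A quick round-trip consistent with that observation (sizes scaled down from the hundreds-of-MB wheels in the report):

```python
import msgpack

# Moderately large binary body as a stand-in for a cached wheel download.
body = b"\x00" * (10 * 1024 * 1024)  # 10 MiB
payload = msgpack.dumps(body)

# bin 32 framing adds a 5-byte header (0xc6 type byte + 4-byte length).
assert len(payload) == len(body) + 5

# loads() derives its limits from len(payload), so any binary object that
# fits inside the payload deserializes without tripping a length limit.
restored = msgpack.loads(payload)
assert restored == body
```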
Opening this as a reminder to myself.
Haven't fully diagnosed yet.
See: https://news.ycombinator.com/item?id=40659973