Document that lru_cache uses hard references #88476
# Problem

The functools.lru_cache decorator locks all arguments to the function in memory (including self), causing hard-to-find memory leaks.

# Expected

I had assumed that the lru_cache would keep weak references, and that when an object is garbage collected, all its cache entries would expire as unreachable. This is not the case.

# Solutions

I will try to make a PR for this.
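To make the reported behavior concrete, here is a minimal sketch (the Leaky class and compute method are hypothetical, not from this report) showing a decorated method keeping its instance alive after the last outside reference is dropped:

```python
import functools
import gc
import weakref

class Leaky:
    @functools.lru_cache()      # the cache lives on the class, not the instance
    def compute(self):
        return 42

obj = Leaky()
obj.compute()                   # caches an entry whose key includes obj
probe = weakref.ref(obj)

del obj
gc.collect()
print(probe() is not None)      # True: the cache still holds obj strongly
```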
Using a weak dictionary is not a correct solution, as the cache must take strong ownership of the arguments and return value to do its job properly. Moreover, there are many types in Python that don't support weak references, so this would be a backwards incompatible change and would limit the cache quite a lot.
@wouter However, I noticed that the current doc doesn't mention the strong reference behavior anywhere. So I think your suggestion to amend the docs is an improvement, thanks!
Also note that many important objects in Python are not weakly referenceable; tuples, for example.
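A quick way to see this with the standard weakref module:

```python
import weakref

try:
    weakref.ref((1, 2, 3))
except TypeError as e:
    print(e)   # cannot create weak reference to 'tuple' object
```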
I'm thinking of a more minimal and targeted edit than what is in the PR. Per the dev guide, we usually word the docs in an affirmative and specific manner (here is what the tool does and an example of how to use it). Recounting a specific debugging case or misassumption usually isn't worthwhile unless it is a common misconception. For strong versus weak references, we've had no previous reports even though the lru_cache() has been around for a long time. Likely, that is because the standard library uses strong references everywhere unless specifically documented to the contrary. Otherwise, we would have to add a strong-reference note to every stateful object in the language.

Another reason it likely hasn't mattered to other users is that an LRU cache automatically purges old entries. If an object is no longer used, it cycles out as new items are added to the cache (see the sketch below). Arguably, a key feature of an LRU algorithm is that you don't have to think about the lifetime of objects.

I'll think on it for a while and will propose an alternate edit that focuses on how the cache works with methods. The essential point is that the instance is included in the cache key (which is usually what people want). Discussing weak vs strong references is likely just a distractor.
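A sketch of that eviction behavior, using an illustrative Widget class and a deliberately tiny maxsize (neither is from this issue):

```python
import functools
import gc
import weakref

class Widget:
    @functools.lru_cache(maxsize=2)
    def area(self):
        return 1.0

w = Widget()
w.area()
probe = weakref.ref(w)
del w                    # the cache entry still pins w for now

# Two fresh instances push w's entry out of the bounded LRU cache.
for _ in range(2):
    Widget().area()

gc.collect()
print(probe())           # None: eviction released the old instance
```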
Agreed! I will leave the PR to you :)
It may be useful to link back to @cached_property() for folks wanting method caching tied to the lifespan of an instance rather than actual LRU logic.
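For reference, a minimal sketch of that pattern (the Dataset class is illustrative): functools.cached_property stores the computed value in the instance's __dict__, so the cache is freed together with the instance.

```python
import functools

class Dataset:
    def __init__(self, rows):
        self.rows = rows

    @functools.cached_property
    def total(self):
        return sum(self.rows)

d = Dataset([1, 2, 3])
print(d.total)   # 6, computed once and stored in d.__dict__
print(d.total)   # 6, fetched from d.__dict__ without recomputation
del d            # the cached value goes away with the instance
```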
This is a full duplicate of bpo-19859. Both ideas, using weak references and changing the documentation, were rejected.
I saw that thread, but the idea was rejected by @rhettinger, who this time seems to be suggesting the documentation changes himself. Maybe he has changed his mind, in which case he can explain the circumstances of his decision if he wants.
Reading this bug thread last week made me realize we had made the following error in our code:

```python
class SomethingView:
    @functools.lru_cache()
    def get_object(self):
        return self._object
```

Now, as this class was instantiated for every (particular kind of) request to a webserver and this method called a few times per request, the lru_cache just kept filling up and up. We had been having a memory leak we couldn't track down, and this was it. I think this is an easy mistake to make, and it was rooted not so much in hard references (though without those it would not have leaked memory) as in the fact that the cache lives on the class and not the object.
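The class-level sharing described above is easy to observe through cache_info(), which lives on the class-wide wrapper rather than on any one instance. In this sketch the method body is replaced with id(self) for illustration:

```python
import functools

class SomethingView:
    @functools.lru_cache()
    def get_object(self):
        return id(self)

a, b = SomethingView(), SomethingView()
a.get_object()
b.get_object()

# One cache for the whole class; every instance passed in is retained.
print(SomethingView.get_object.cache_info())
# CacheInfo(hits=0, misses=2, maxsize=128, currsize=2)
```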
See PR 26731 for a draft FAQ entry. Let me know what you think.
PR 26731 looks very good to me. My only comment, which I am not sure is worth adding or is a general lru_cache thing, is that instances are kept alive for as long as their entries sit in the cache. Not sure whether that's too specific to the problem we encountered and we are all consenting adults who should infer this, or whether it is helpful: I leave it up to your/other people's judgement.

P.S. In programming.rst there is also the "Why are default values shared between objects?" section, which actually uses default values to make its own poor version of a cache. It should probably at least mention that lru_cache could be used (unless you particularly need callers to be able to pass their own cache).
Your words aren't making any sense to me. The default maxsize is 128, so entries don't pile up without bound.
I clearly was missing some words there, Raymond. I meant: if one has set maxsize=None.
(But, consenting adults: if you set maxsize=None for "efficiency" in a long-running process, you had better be sure of what you are doing and make sure the cache cannot grow unbounded.)
The docs already say, "If maxsize is set to None, the LRU feature is disabled and the cache can grow without bound."
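A quick illustration of that unbounded growth (the square function and loop size are arbitrary):

```python
import functools

@functools.lru_cache(maxsize=None)  # LRU eviction disabled
def square(x):
    return x * x

for i in range(10_000):
    square(i)

# Every distinct argument stays cached forever.
print(square.cache_info())
# CacheInfo(hits=0, misses=10000, maxsize=None, currsize=10000)
```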
Adding a weak referencing recipe here just so I can find it in the future.

```python
import functools
import weakref


def weak_lru(maxsize=128, typed=False):
    """LRU Cache decorator that keeps a weak reference to "self"."""
    def decorator(func):
        ref = weakref.ref

        @functools.lru_cache(maxsize, typed)
        def _func(_self, /, *args, **kwargs):
            return func(_self(), *args, **kwargs)

        @functools.wraps(func)
        def wrapper(self, /, *args, **kwargs):
            return _func(ref(self), *args, **kwargs)

        return wrapper
    return decorator
```
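Hypothetical usage of the recipe, showing that the cache no longer pins instances (the Server class and status method are illustrative):

```python
import gc
import weakref

class Server:
    @weak_lru(maxsize=32)
    def status(self, port):
        return f"server {id(self)} on port {port}"

s = Server()
s.status(80)                 # populates the cache, keyed on weakref.ref(s)
probe = weakref.ref(s)

del s
gc.collect()
print(probe())               # None: the cache entry did not keep s alive
```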