Infer starts returning empty results unexpectedly #1761
Comments
Can you generate the debug information and post it here? (with …)
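(For reference, a minimal sketch of capturing such a log, assuming the `(color, message)` callback signature that `jedi.set_debug_function` passes through:)

```python
import jedi

log_file = open("debug.log", "w")

def write_debug(color, message):
    # Jedi passes an ANSI colour hint plus the message text; keep the text.
    log_file.write(message + "\n")

jedi.set_debug_function(write_debug, warnings=True, notices=True, speed=True)

script = jedi.Script("import os\nos.path.join", path="example.py")
script.infer(2, 9)  # debug output for this call lands in debug.log
log_file.close()
```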
The full log is here: https://github.com/pbudzyns/inference-issue/blob/main/debug.log
The problem is basically this:
This is not going to be fixed, since Jedi just has to give up after a while. I'm sorry that this is annoying for you, but this is just how it is. I'm working on a Rust version of Jedi, so this might get better in a different piece of software, but for the time being (1-2 years) this is probably going to stay the way it is.
Thank you for this clarification, it's very useful. I see the warning leads to the following piece of code: jedi/jedi/inference/syntax_tree.py, line 68 in 0457242.
So, as I understand it, things stop after reaching the maximum. I have tried changing its value, and inference seems to keep working for the entire file. What would you think about making this number configurable, even in the form of a global variable as in jedi.inference.recursion? I would be more than happy to open an appropriate pull request. Also, if there are other contributions I could make to address the problem mentioned in the comment (jedi/jedi/inference/syntax_tree.py, line 60 in 0457242), I'm ready to have a look at that.
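As an illustration of the kind of global-variable tweak proposed here: `jedi.inference.recursion` already exposes module-level limits that can simply be reassigned. A minimal sketch, with the caveat that the defaults noted in comments are approximate and that the maintainer advises against touching them (see below):

```python
import jedi.inference.recursion as recursion

# Module-level limits in jedi.inference.recursion; reassigning them affects
# all subsequent inference. Defaults are approximate.
recursion.recursion_limit = 30                  # default ~15
recursion.total_function_execution_limit = 400  # default ~200
recursion.per_function_execution_limit = 12     # default ~6
```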
I don't like it. Those internal settings really serve a purpose and have been "battle-tested". If you really want to change them, just fork Jedi. The general issue is that Jedi just does not have caches for this kind of stuff, so there's really no solution unless #1059 is tackled.
@davidhalter what would your thoughts be on having the API report this result slightly differently, so that "there are no values" is detectably different from "we gave up"? Alternatively, what about allowing the client to specify the amount of time, space, or other cost it is willing for the request to consume? I realise that either is a bit of a bigger change, but it might provide a way for these sorts of things to be more user-customisable for users who are using Jedi directly.
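A hypothetical sketch of the first suggestion; nothing like this exists in Jedi's API today, and the `InferenceResult` name and `truncated` flag are invented here purely for illustration:

```python
from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class InferenceResult:
    values: List[Any] = field(default_factory=list)
    truncated: bool = False  # True if an internal limit was hit

    def __bool__(self) -> bool:
        return bool(self.values)

# A client could then tell the two empty cases apart:
#
#   result = script.infer(line, column)   # hypothetical return type
#   if not result and result.truncated:
#       ...  # warn the user, retry with a bigger budget, etc.
```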
@PeterJCLaw I agree that this would be desirable. It's just not very realistic at the moment, IMO. I also don't think that people will actually use the API to do anything useful. It's definitely interesting for debugging, but how would you show the user that we gave up? As I said, definitely desirable, but I doubt that such an API would be used a lot. I'm not going to implement it, but I'm 100% open to receiving pull requests about this. I just don't think making internal limits like the recursion count public is a good idea. There are various problems that you run into if you increase this number from 300 to 600: it might lead to RecursionErrors, or to actual stack overflows that panic and stop the Python process without even a Python exception. So the solution there would probably be …

Just releasing the current numbers is really not helping anyone, since we would just get different issues. I actually removed pretty much all of the "performance" related settings in …
Indeed, presenting it to the end user is definitely complicated, and I'd tend to agree that it's not something end users are likely to be in a position to really change in a granular fashion. I think I'm imagining it more for extension authors, who might then expose settings to the user around the amount of resource that Jedi is allowed to take up. VSCode already sort-of does this: you can set the amount of memory which Jedi is allowed to consume (though I don't actually know how that's implemented). The other use case I can maybe see is the use of Jedi as a batch rather than interactive analysis tool. I don't know if/how much it's used in that context, but I can imagine that it might be. For the recursion limit specifically, maybe an approach which is a bit dynamic based on …
There's … We're already pretty much at the limit. If you go higher you start seeing more stack overflows. It's pretty annoying, because in Python some stack frames seem to be significantly larger than others, so it's pretty much impossible to guess what the limit is. So we're limited in both ways: the recursion limit cannot be increased, because then the process randomly stops (stack overflow), AND the recursion limit cannot be decreased, because then we start to see CPython RecursionErrors. I understand that some of Jedi's architecture may not be optimal, but at the same time a stack size of 3000 Python frames is not that much...
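For context, here is a minimal sketch of the two failure modes being traded off, using only the standard `sys` module (the depth of 3000 echoes the figure mentioned above):

```python
import sys

def recurse(depth=0):
    return recurse(depth + 1)

# The interpreter's recursion limit is what turns runaway recursion into a
# catchable exception instead of a C-stack overflow.
sys.setrecursionlimit(3000)
try:
    recurse()
except RecursionError:
    print("hit the Python recursion limit")  # the "safe" failure mode

# Raising the limit far beyond what the OS thread stack can actually hold,
# e.g. sys.setrecursionlimit(100_000), risks the other failure mode: the
# process dies with a stack overflow and no Python exception at all.
```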
I have a class which originally was a Qt application and contains quite some objects inside. What I'm trying to achieve is to go through the class and get inference for called methods (using libcst to visit execution nodes and get position metadata). In most cases it works well; however, for the others Script.infer seems to stop working at some point.

The code to reproduce the problem is a bit long (and the occurrence of the problem seems to be strictly correlated with code length), so I put it here: https://github.com/pbudzyns/inference-issue

The class that causes problems: https://github.com/pbudzyns/inference-issue/blob/main/example_class.py

To scan through the code I use this script: https://github.com/pbudzyns/inference-issue/blob/main/inference_run.py
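For readers who don't want to open the repository, here is a minimal sketch of that scanning approach, assuming libcst's standard PositionProvider metadata API; inference_run.py in the linked repository is the authoritative version:

```python
import jedi
import libcst as cst
from libcst.metadata import MetadataWrapper, PositionProvider

source = open("example_class.py").read()
script = jedi.Script(source, path="example_class.py")

class CallCollector(cst.CSTVisitor):
    METADATA_DEPENDENCIES = (PositionProvider,)

    def visit_Call(self, node: cst.Call) -> None:
        # Position of the callee expression, e.g. the end of `self.method`.
        pos = self.get_metadata(PositionProvider, node.func).end
        # Ask Jedi what the called name refers to at that position.
        definitions = script.infer(pos.line, pos.column)
        print(pos.line, pos.column, [d.full_name for d in definitions])

MetadataWrapper(cst.parse_module(source)).visit(CallCollector())
```

An empty `definitions` list partway through the scan, for calls that inferred fine earlier, is the behaviour reported below.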
To illustrate the problem, here is the sample output. Initially inference works perfectly, but suddenly it's not able to return anything, even for methods that were inferred successfully before.
I have tried to change the jedi.inference.recursion or jedi.cache.time_cache() settings, but it seems to have no impact.

Environment:
Ubuntu 16.04 LTS
Python 3.7.9, 3.8.5
Jedi 0.18.0