Executing python code directly from AI output #286
Comments
What do you mean? It's already a function in the project. |
Oh, really? My bad then 😄 What's the name of the command? |
`execute_python_file(arguments["file"])` |
The feature I'm proposing is the direct execution of code. |
Same problem here too : NEXT ACTION: COMMAND = execute_python_file ARGUMENTS = {'file': '<path_to_python_script>'}
Executing file '<path_to_python_script>' in workspace 'auto_gpt_workspace'
SYSTEM: Command execute_python_file returned: Error: Invalid file type. Only .py files are allowed. I asked it to create a python script and it just tried to execute "path_to_python_script". |
Closing as |
Not a duplicate. This is a feature request for direct python code execution. |
That is already implemented |
As I said:
If you want to close as "won't do", that's okay. But I don't think it's a duplicate. |
Ah, thanks for elaborating! I think this is something we could add without too much effort. The tricky thing is to properly sandbox it, in a way equivalent to |
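To illustrate the direct-execution idea being discussed: a minimal sketch (hypothetical function name, not part of the project) that writes a snippet to a temporary file and runs it in a separate interpreter process. Note that a subprocess with a timeout limits runaway scripts but is not a real sandbox; Docker-style isolation, as the comment above implies, would still be needed:

```python
import os
import subprocess
import sys
import tempfile

def execute_python_code(code: str, timeout: int = 10) -> str:
    """Run a Python snippet in a separate interpreter process.

    Hypothetical sketch: the timeout stops runaway scripts, but the
    child process still has full filesystem and network access, so
    this is NOT equivalent to proper sandboxing (e.g. Docker).
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        if result.returncode != 0:
            return f"Error: {result.stderr}"
        return result.stdout
    except subprocess.TimeoutExpired:
        return "Error: execution timed out"
    finally:
        os.unlink(path)  # clean up the temporary script
```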
ChatGPT is less confused by this phrasing From my own observations and others (i.e. #101 and #286) ChatGPT seems to think that `evaluate_code` will actually run code, rather than just provide feedback. Since changing the phrasing to `analyze_code` I haven't seen the AI make this mistake. --------- Co-authored-by: Reinier van der Leer <[email protected]>
this has more to do with in-memory execution of code that isn't written to disk, I suppose ? |
Yeah, it is somewhat related. I think this issue might supersede that one/that issue might supersede this one depending on how it's implemented. Although this one is a bit more general (e.g. the Agent might spin up a python instance just to do some calculations, so nothing necessarily related to an API). Edit: As a matter of fact, now that I think about it, I think these should be separate tasks. Meaning, a |
`search_for_api` would be a specialization of a `do_research` (crawl) command, whereas the API could be either a classical API or a networking API.

Some of us have already succeeded in getting Agent-GPT to write code by exploring API docs - my recent experiments made it download the GitHub API docs and come up with a CLI tool to filter PRs by excluding those that touch the same paths/files: master...Boostrix:Auto-GPT:topic/PRHelper.py. While trivial in nature, this can already be pretty helpful for identifying PRs that can be easily reviewed/integrated, because they're not stepping on anyone's toes. And it would be easy to extend as well.

The point being: having some sort of search-API / extend-yourself mechanism is exactly what many folks here are suggesting when it comes to "self-improving", in its simplest form - adding features without having to write much/any code.

So, thinking about it, I am inclined to think that commands should be based on classes that can be extended: a research command would be based on a crawler/spider class (http requests: #2730), and a find_api command would be based on the research command class. That way, you can have your cake and eat it, while also ensuring that the underlying functionality (searching/exploring the solution space) is available for other use cases - like the idea of hooking up the agent to a research paper server (#826) or making it process PDF files (#1353).

Commands in their current form worked, but to support scaling and reduce code rot, it would make sense to identify overlapping functionality and then use a layered approach for common building blocks. The "API explorer" you mentioned could itself be API-based, so there would be no need to go through HTML scraping - but some folks may need exactly that, so a scraping mechanism would be a higher-level implementation of a crawler (#2730).

Related talks collated here: #514 (comment) |
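The layered command hierarchy proposed above could be sketched like this (all class and command names here are illustrative, not actual project code): a base `Command`, a crawler building block, and specializations built on top of it:

```python
from abc import ABC, abstractmethod

class Command(ABC):
    """Base class: every command exposes a name and an execute() hook."""
    name: str = "command"

    @abstractmethod
    def execute(self, **kwargs) -> str: ...

class CrawlCommand(Command):
    """Low-level building block: fetch pages (the http-requests layer)."""
    name = "crawl"

    def execute(self, url: str, **kwargs) -> str:
        # Placeholder for a real HTTP fetch (e.g. via requests).
        return f"fetched {url}"

class ResearchCommand(CrawlCommand):
    """Higher-level: crawl plus filtering/summarising of results."""
    name = "do_research"

    def execute(self, query: str, **kwargs) -> str:
        page = super().execute(url=f"https://example.com/search?q={query}")
        return f"summary of {page}"

class FindApiCommand(ResearchCommand):
    """Specialization of research aimed at locating API documentation."""
    name = "search_for_api"

    def execute(self, topic: str, **kwargs) -> str:
        return super().execute(query=f"{topic} API documentation")
```

With this layering, the crawl layer stays reusable for other use cases (research-paper servers, PDF processing) while `search_for_api` remains a thin specialization.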
This issue was closed automatically because it has been stale for 10 days with no activity. |
regarding the API explorer idea: #5536 |
I've encountered the same problem as #101. GPT thinks that `evaluate_code` will execute python.

On one hand, that's a bug, which is addressed in #101. On the other hand... that's a very interesting idea from GPT. Perhaps giving it the ability to execute python code could allow it to perform many tasks in a much more dynamic way.
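As a minimal sketch of what such in-memory execution could look like (hypothetical helper, not project code): run a snippet with `exec()` and capture its stdout. This runs in-process with no isolation at all, so it would only be acceptable behind the same sandboxing used for `execute_python_file`:

```python
import contextlib
import io

def run_python_snippet(code: str) -> str:
    """Hypothetical command body: exec a snippet and capture its stdout.

    exec() runs in the agent's own process with NO isolation, so this
    is only a sketch of the interface, not a safe implementation.
    """
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, {"__builtins__": __builtins__})
    return buffer.getvalue()
```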