consider switching basic ipython magic to jupyter-ai
#30
Comments
I am not familiar with ipython magics, but if we want code generation and execution, it is already possible using the REPL tool. I have an example in LLaMP generating and running an ASE EMT relaxation, as implemented here. Regarding streaming, it is possible to turn it off if code generation speed is a concern (although I don't think there would be much difference).
Thanks Yuan, I see that this REPL tool can be useful as well. I had a slightly different use case in mind, though: instead of an agent that executes the code directly, I'd simply like the language model to generate code for me in the next notebook cell, so that I can review and edit it before running it myself. This would not use the calculation agents (or the REPL), but it might in principle use others, e.g. to read the documentation of packages. In most cases, however, the added value over vanilla ChatGPT/GitHub Copilot comes from the engineered system prompt, which knows about the simulation packages installed in my environment and, besides information on how to use the langchain tool function, could also include information on how to interact with the Python API of the underlying simulation software, how to best run certain calculations, etc. This use case is slightly different from the one in the demo video, but I think it has great short-term potential (I would use such a tool).
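The environment-aware system prompt described above can be sketched with the standard library alone. The package list and prompt wording here are made up for illustration, and `build_system_prompt` is a hypothetical helper, not part of langsim:

```python
from importlib.metadata import distributions

# Hypothetical list of simulation packages the prompt should know about
SIM_PACKAGES = {"ase", "pymatgen", "lammps", "gpaw"}

def build_system_prompt() -> str:
    """Assemble a system prompt mentioning installed simulation packages."""
    names = {(d.metadata["Name"] or "").lower() for d in distributions()}
    installed = sorted(names & SIM_PACKAGES)
    lines = ["You are a copilot for atomistic simulations."]
    if installed:
        lines.append("Installed simulation packages: " + ", ".join(installed) + ".")
    else:
        lines.append("No known simulation packages were detected.")
    return "\n".join(lines)

print(build_system_prompt())
```

In practice this is where package-specific usage notes (APIs, recommended settings) would be appended, which is exactly the added value over a generic assistant.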
That is a very interesting idea! I am not sure if someone has implemented this before, but imo it is possible by
Will be interested in developing this :)
Motivation
Enabling experimentalists to run simulations is a great long-term goal, but it requires substantial work: vetting workflows, documenting them, and then actually testing with a broad audience of experimentalists to weed out edge cases and earn trust.
A much lower-hanging fruit is making computational scientists more productive (and dogfooding always makes a product better). For this use case, hardcoded agents are typically too restrictive; instead, we want langsim to act as a copilot that helps us write Python code for running simulations.
This is possible via IPython magics (a basic implementation with an immediately executable "code response" is here), but once you get into the details (e.g. streaming responses rather than waiting for a lengthy code completion to finish), it starts to get tricky.
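A minimal sketch of such a magic, distinct from the linked implementation: it queues the generated code as the next cell via IPython's `set_next_input` payload mechanism rather than executing it. `fake_llm` is a hypothetical stand-in for the real model call:

```python
from IPython.core.magic import Magics, magics_class, line_magic

def fake_llm(prompt: str) -> str:
    """Hypothetical stand-in for the real language-model call."""
    return f"# generated for: {prompt}\nprint('hello from the model')\n"

@magics_class
class CodeGenMagics(Magics):
    @line_magic
    def gen(self, line):
        # Instead of executing the model's code, queue it as the content of
        # the next cell so the user can review and edit it before running.
        self.shell.set_next_input(fake_llm(line), replace=False)

def load_ipython_extension(ipython):
    # Entry point for `%load_ext`
    ipython.register_magics(CodeGenMagics)
```

This covers the "code in the next cell" use case, but it returns the completion in one shot; streaming partial responses into the notebook UI is where it gets tricky.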
jupyter-ai follows the same route, but adds extra features, including streaming responses, as well as other nifty UI integrations (e.g. a copilot in the side bar).
Downsides
jupyter-ai was not supported on Python 3.12.
Steps
As far as I understand, in order to connect our setup to jupyter-ai, we need to create a "Jupyter AI module", for which they offer a cookiecutter. Jupyter AI already uses langchain, so that should help with the integration, but when I briefly looked into this during the hackathon (see code), the integration point was at the level of langchain_core.language_models.llms.LLM rather than at the level of the agents/agent executors that we use in langsim. I was not able to quickly determine whether this poses a problem; perhaps @chiang-yuan can give some pointers on whether establishing this link is straightforward or whether coupling agents to jupyter-ai is difficult with the current implementation.