diff --git a/docs/code-execution/settings.mdx b/docs/code-execution/settings.mdx index 8d72fede2e..0373d0af9e 100644 --- a/docs/code-execution/settings.mdx +++ b/docs/code-execution/settings.mdx @@ -4,4 +4,4 @@ title: Settings The `interpreter.computer` is responsible for executing code. -[Click here to view `interpreter.computer` settings.](https://docs.openinterpreter.com/settings/all-settings#computer) \ No newline at end of file +[Click here](https://docs.openinterpreter.com/settings/all-settings#computer) to view `interpreter.computer` settings. diff --git a/docs/guides/basic-usage.mdx b/docs/guides/basic-usage.mdx index 8b679f2390..4080b5393d 100644 --- a/docs/guides/basic-usage.mdx +++ b/docs/guides/basic-usage.mdx @@ -28,18 +28,20 @@ title: Basic Usage ### Interactive Chat -To start an interactive chat in your terminal, either run `interpreter` from the command line: +To start an interactive chat in your terminal, either run `interpreter` from the command line or `interpreter.chat()` from a .py file. -```shell + + +```shell Terminal interpreter ``` -Or `interpreter.chat()` from a .py file: - -```python +```python Python interpreter.chat() ``` + + --- ### Programmatic Chat @@ -60,18 +62,22 @@ interpreter.chat("These look great but can you make the subtitles bigger?") ### Start a New Chat -In your terminal, Open Interpreter behaves like ChatGPT and will not remember previous conversations. Simply run `interpreter` to start a new chat: +In your terminal, Open Interpreter behaves like ChatGPT and will not remember previous conversations. Simply run `interpreter` to start a new chat. -```shell +In Python, Open Interpreter remembers conversation history. If you want to start fresh, you can reset it. + + + +```shell Terminal interpreter ``` -In Python, Open Interpreter remembers conversation history. 
If you want to start fresh, you can reset it: - -```python +```python Python interpreter.messages = [] ``` + + --- ### Save and Restore Chats @@ -80,13 +86,15 @@ In your terminal, Open Interpreter will save previous conversations to ` + +```shell Terminal interpreter --conversations ``` -In Python, `interpreter.chat()` returns a List of messages, which can be used to resume a conversation with `interpreter.messages = messages`: - -```python +```python Python # Save messages to 'messages' messages = interpreter.chat("My name is Killian.") @@ -97,6 +105,8 @@ interpreter.messages = [] interpreter.messages = messages ``` + + --- ### Configure Default Settings diff --git a/docs/guides/profiles.mdx b/docs/guides/profiles.mdx index 9c020ae775..da207e47d5 100644 --- a/docs/guides/profiles.mdx +++ b/docs/guides/profiles.mdx @@ -31,15 +31,3 @@ interpreter.loop = True There are many settings that can be configured. [See them all here](/settings/all-settings) - -## Helpful settings for local models - -Local models benefit from more coercion and guidance. This verbosity of adding extra context to messages can impact the conversational experience of Open Interpreter. The following settings allow templates to be applied to messages to improve the steering of the language model while maintaining the natural flow of conversation. - -`interpreter.user_message_template` allows users to have their message wrapped in a template. This can be helpful steering a language model to a desired behaviour without needing the user to add extra context to their message. - -`interpreter.always_apply_user_message_template` has all user messages to be wrapped in the template. If False, only the last User message will be wrapped. - -`interpreter.code_output_template` wraps the output from the computer after code is run. This can help with nudging the language model to continue working or to explain outputs. 
- -`interpreter.empty_code_output_template` is the message that is sent to the language model if code execution results in no output. diff --git a/docs/guides/running-locally.mdx b/docs/guides/running-locally.mdx index cea95fc878..43804fc896 100644 --- a/docs/guides/running-locally.mdx +++ b/docs/guides/running-locally.mdx @@ -49,6 +49,18 @@ interpreter.llm.api_base = "http://localhost:11434" interpreter.chat("how many files are on my desktop?") ``` +## Helpful settings for local models + +Local models benefit from more coercion and guidance. Adding this extra context to messages, however, can affect the conversational experience of Open Interpreter. The following settings apply templates to messages, improving the steering of the language model while maintaining the natural flow of conversation. + +`interpreter.user_message_template` wraps the user's message in a template. This can help steer a language model toward a desired behaviour without requiring the user to add extra context to their message. + +`interpreter.always_apply_user_message_template` causes every user message to be wrapped in the template. If `False`, only the last user message is wrapped. + +`interpreter.code_output_template` wraps the computer's output after code is run. This can help nudge the language model to continue working or to explain the output. + +`interpreter.empty_code_output_template` is the message sent to the language model when code execution produces no output. + Other configuration settings are explained in [Settings](/settings/all-settings). 
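The template settings in the hunk above work by substituting text into a `{content}` placeholder, as the all-settings page describes for `interpreter.user_message_template`. As a rough sketch of that mechanism — the template strings and the `apply_template` helper below are hypothetical illustrations, not part of Open Interpreter's API or its default templates:

```python
# Hypothetical sketch of how {content}-style templates might wrap messages.
# These template strings are illustrative examples, not Open Interpreter defaults.
user_message_template = "{content}\nRespond with runnable code where possible."
code_output_template = "Code output: {content}\nExplain this output, then continue the task."

def apply_template(template: str, content: str) -> str:
    # Substitute the user message (or code output) into the {content} slot.
    return template.format(content=content)

wrapped = apply_template(user_message_template, "How many files are on my desktop?")
print(wrapped)
```

In Open Interpreter itself, you would set the templates as attributes, e.g. `interpreter.user_message_template = "{content}. Be concise, and write code to answer."`, and the interpreter applies them for you.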
diff --git a/docs/language-models/introduction.mdx b/docs/language-models/introduction.mdx index cd454ae944..fd0d364af8 100644 --- a/docs/language-models/introduction.mdx +++ b/docs/language-models/introduction.mdx @@ -10,25 +10,20 @@ For this reason, we recommend starting with a **hosted** model, then switching t - + Connect to a hosted language model like GPT-4 **(recommended)** - + Setup a local language model like Mistral -
-
+
+
-Thank you to the incredible [LiteLLM](https://litellm.ai/) team for their efforts in connecting Open Interpreter to hosted providers. \ No newline at end of file + + Thank you to the incredible [LiteLLM](https://litellm.ai/) team for their + efforts in connecting Open Interpreter to hosted providers. + diff --git a/docs/language-models/local-models/janai.mdx b/docs/language-models/local-models/janai.mdx index 63f150ef72..215ed9c2f4 100644 --- a/docs/language-models/local-models/janai.mdx +++ b/docs/language-models/local-models/janai.mdx @@ -49,3 +49,8 @@ interpreter.context_window = 5000 ``` + + + If Jan is producing strange output, or no output at all, make sure to update + to the latest version and clean your cache. + diff --git a/docs/language-models/settings.mdx b/docs/language-models/settings.mdx index 873f41bb78..02968b6643 100644 --- a/docs/language-models/settings.mdx +++ b/docs/language-models/settings.mdx @@ -4,4 +4,4 @@ title: Settings The `interpreter.llm` is responsible for running the language model. -[Click here to view `interpreter.llm` settings.](/settings/all-settings#language-model) +[Click here](/settings/all-settings#language-model) to view `interpreter.llm` settings. diff --git a/docs/mint.json b/docs/mint.json index 177dcf1b3f..de357980da 100644 --- a/docs/mint.json +++ b/docs/mint.json @@ -97,10 +97,10 @@ { "group": "Code Execution", "pages": [ - "code-execution/settings", "code-execution/usage", "code-execution/computer-api", - "code-execution/custom-languages" + "code-execution/custom-languages", + "code-execution/settings" ] }, { diff --git a/docs/safety/introduction.mdx b/docs/safety/introduction.mdx index 749b30ad93..46dc09415b 100644 --- a/docs/safety/introduction.mdx +++ b/docs/safety/introduction.mdx @@ -14,4 +14,9 @@ Safety is a top priority for us at Open Interpreter. Running LLM generated code ## Notice -Open Interpreter is not responsible for any damage caused by using the package. 
These safety measures provide no guarantees of safety or security. Please be careful when running code generated by Open Interpreter, and make sure you understand what it will do before running it. + + Open Interpreter is not responsible for any damage caused by using the + package. These safety measures provide no guarantees of safety or security. + Please be careful when running code generated by Open Interpreter, and make + sure you understand what it will do before running it. + diff --git a/docs/settings/all-settings.mdx b/docs/settings/all-settings.mdx index a8aa9f66b4..7fe06b8c2b 100644 --- a/docs/settings/all-settings.mdx +++ b/docs/settings/all-settings.mdx @@ -393,7 +393,7 @@ interpreter --help -### Force Task Completion +### Loop (Force Task Completion) Runs Open Interpreter in a loop, requiring it to admit to completing or failing every task. @@ -622,8 +622,6 @@ This property holds a list of `messages` between the user and the interpreter. You can use it to restore a conversation: - - ```python interpreter.chat("Hi! Can you print hello world?") @@ -652,8 +650,6 @@ print(interpreter.messages) interpreter.messages = messages # A list that resembles the one above ``` - - ### User Message Template A template applied to the User's message. `{content}` will be replaced with the user's message, then sent to the language model. diff --git a/examples/README.md b/examples/README.md index de97eb14b5..47d87e5777 100644 --- a/examples/README.md +++ b/examples/README.md @@ -2,17 +2,18 @@ This directory contains various examples demonstrating how to use Open Interpreter in different scenarios and configurations. Each example is designed to provide a practical guide to integrating and leveraging Open Interpreter's capabilities in your projects. -## Overview - -- **Terminal Usage**: Examples of how to use Open Interpreter directly from your terminal. -- **Python Integration**: How to integrate Open Interpreter into your Python scripts for more complex workflows. 
-- **Custom Profiles**: Examples of using YAML files for setting default behaviors and configurations. - ## Colab Notebooks [Google Colab](https://colab.google/) provides a sandboxed development environment for you to run code in. Here are some Jupyter Notebooks on Colab that you can try: -Local 3: [![Local 3](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1jWKKwVCQneCTB5VNQNWO0Wxqg1vG_E1T#scrollTo=13ISLtY9_v7g) -Interactive Demo: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1WKmRXZgsErej2xUriKzxrEAXdxMSgWbb?usp=sharing) +### Local 3 + +[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1jWKKwVCQneCTB5VNQNWO0Wxqg1vG_E1T#scrollTo=13ISLtY9_v7g) + +### Interactive Demo + +[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1WKmRXZgsErej2xUriKzxrEAXdxMSgWbb?usp=sharing) + +### Voice Interface -Voice Interface: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1NojYGHDgxH6Y1G1oxThEBBb2AtyODBIK) +[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1NojYGHDgxH6Y1G1oxThEBBb2AtyODBIK)