
Commit

Add codespell to pre-commit hooks and fix spelling of existing files (#1161)

* fixed spelling, minor errors and reformatted using black

* polishing

* added codespell to pre-commit hooks, fixed a number of spelling errors and a few minor bugs in the code

* update autogen library version in notebooks

* update autogen library version in notebooks

* update autogen library version in notebooks

* update autogen library version in notebooks

* update autogen library version in notebooks
davorrunje authored Jan 7, 2024
1 parent 295b835 commit 8f065e0
Showing 41 changed files with 4,094 additions and 1,054 deletions.
12 changes: 12 additions & 0 deletions .pre-commit-config.yaml
Original file line number Diff line number Diff line change
@@ -31,3 +31,15 @@ repos:
hooks:
- id: ruff
args: ["--fix"]
- repo: https://github.com/codespell-project/codespell
rev: v2.2.6
hooks:
- id: codespell
args: ["-L", "ans,linar,nam,"]
exclude: |
(?x)^(
pyproject.toml |
website/static/img/ag.svg |
website/yarn.lock |
notebook/.*
)$
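The `exclude` value above is a Python verbose-mode (`(?x)`) regular expression, in which whitespace and newlines inside the pattern are ignored. A quick sketch of how pre-commit would filter paths with it (the file list below is hypothetical):

```python
import re

# Same verbose-mode pattern as the codespell hook's `exclude` entry.
pattern = re.compile(
    r"""(?x)^(
        pyproject.toml |
        website/static/img/ag.svg |
        website/yarn.lock |
        notebook/.*
    )$"""
)

# Hypothetical repo paths: matching files are skipped by the hook.
paths = [
    "pyproject.toml",
    "website/yarn.lock",
    "notebook/agentchat_chess.ipynb",
    "autogen/agentchat/conversable_agent.py",
]
checked = [p for p in paths if not pattern.match(p)]
print(checked)  # only the conversable_agent.py path is spell-checked
```

Because `(?x)` strips the layout whitespace, the multi-line YAML block and the one-line alternation `^(pyproject.toml|…|notebook/.*)$` are equivalent.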
6 changes: 3 additions & 3 deletions autogen/agentchat/conversable_agent.py
@@ -694,7 +694,7 @@ def generate_oai_reply(
tool_responses = message.get("tool_responses", [])
if tool_responses:
all_messages += tool_responses
-# tool role on the parent message means the content is just concatentation of all of the tool_responses
+# tool role on the parent message means the content is just concatenation of all of the tool_responses
if message.get("role") != "tool":
all_messages.append({key: message[key] for key in message if key != "tool_responses"})
else:
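The flattening behavior in this hunk can be sketched as a standalone function (the message dicts below are hypothetical, not AutoGen's exact internals):

```python
# Sketch of the tool_responses handling above: tool responses are hoisted
# into the flat message list, and a non-"tool" parent message is appended
# with its "tool_responses" key stripped.
def flatten(messages):
    all_messages = []
    for message in messages:
        tool_responses = message.get("tool_responses", [])
        if tool_responses:
            all_messages += tool_responses
            # tool role on the parent message means the content is just
            # concatenation of all of the tool_responses
            if message.get("role") != "tool":
                all_messages.append(
                    {key: message[key] for key in message if key != "tool_responses"}
                )
        else:
            all_messages.append(message)
    return all_messages

msgs = [
    {"role": "assistant", "content": "calling tools",
     "tool_responses": [{"role": "tool", "content": "42"}]},
    {"role": "user", "content": "thanks"},
]
print(flatten(msgs))  # 3 flat messages; the parent loses "tool_responses"
```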
@@ -1682,7 +1682,7 @@ def _decorator(func: F) -> F:
RuntimeError: if the LLM config is not set up before registering a function.
"""
-# name can be overwriten by the parameter, by default it is the same as function name
+# name can be overwritten by the parameter, by default it is the same as function name
if name:
func._name = name
elif not hasattr(func, "_name"):
@@ -1746,7 +1746,7 @@ def _decorator(func: F) -> F:
ValueError: if the function description is not provided and not propagated by a previous decorator.
"""
-# name can be overwriten by the parameter, by default it is the same as function name
+# name can be overwritten by the parameter, by default it is the same as function name
if name:
func._name = name
elif not hasattr(func, "_name"):
2 changes: 1 addition & 1 deletion autogen/oai/openai_utils.py
@@ -261,7 +261,7 @@ def config_list_from_models(
"""
Get a list of configs for API calls with models specified in the model list.
-This function extends `config_list_openai_aoai` by allowing to clone its' out for each fof the models provided.
+This function extends `config_list_openai_aoai` by allowing to clone its' out for each of the models provided.
Each configuration will have a 'model' key with the model name as its value. This is particularly useful when
all endpoints have same set of models.
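The per-model cloning this docstring describes amounts to roughly the following sketch (plain dicts standing in for endpoint configs; the keys and values are hypothetical, not the library's exact API):

```python
import copy

def clone_for_models(base_configs, models):
    """Clone each base endpoint config once per model, setting its 'model' key."""
    config_list = []
    for model in models:
        for base in base_configs:
            cfg = copy.deepcopy(base)
            cfg["model"] = model
            config_list.append(cfg)
    return config_list

# Hypothetical single endpoint shared by all models.
base = [{"api_key": "sk-hypothetical", "base_url": "https://example.invalid/v1"}]
configs = clone_for_models(base, ["gpt-4", "gpt-3.5-turbo"])
print([c["model"] for c in configs])
```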
457 changes: 243 additions & 214 deletions notebook/Async_human_input.ipynb

Large diffs are not rendered by default.

4 changes: 2 additions & 2 deletions notebook/agentchat_MathChat.ipynb
@@ -117,7 +117,7 @@
"source": [
"## Construct agents for MathChat\n",
"\n",
-"We start by initialzing the `AssistantAgent` and `MathUserProxyAgent`. The system message needs to be set to \"You are a helpful assistant.\" for MathChat. The detailed instructions are given in the user message. Later we will use the `MathUserProxyAgent.generate_init_message` to combine the instructions and a math problem for an initial message to be sent to the LLM assistant."
+"We start by initializing the `AssistantAgent` and `MathUserProxyAgent`. The system message needs to be set to \"You are a helpful assistant.\" for MathChat. The detailed instructions are given in the user message. Later we will use the `MathUserProxyAgent.generate_init_message` to combine the instructions and a math problem for an initial message to be sent to the LLM assistant."
]
},
{
@@ -266,7 +266,7 @@
"metadata": {},
"outputs": [],
"source": [
-"# we set the prompt_type to \"python\", which is a simplied version of the default prompt.\n",
+"# we set the prompt_type to \"python\", which is a simplified version of the default prompt.\n",
"math_problem = \"Problem: If $725x + 727y = 1500$ and $729x+ 731y = 1508$, what is the value of $x - y$ ?\"\n",
"mathproxyagent.initiate_chat(assistant, problem=math_problem, prompt_type=\"python\")"
]
20 changes: 10 additions & 10 deletions notebook/agentchat_RetrieveChat.ipynb
@@ -42,7 +42,7 @@
"\n",
"AutoGen requires `Python>=3.8`. To run this notebook example, please install the [retrievechat] option.\n",
"```bash\n",
-"pip install \"pyautogen[retrievechat]~=0.2.0b5\" \"flaml[automl]\"\n",
+"pip install \"pyautogen[retrievechat]>=0.2.3\" \"flaml[automl]\"\n",
"```"
]
},
@@ -52,7 +52,7 @@
"metadata": {},
"outputs": [],
"source": [
-"# %pip install \"pyautogen[retrievechat]~=0.2.0b5\" \"flaml[automl]\""
+"# %pip install \"pyautogen[retrievechat]>=0.2.3\" \"flaml[automl]\""
]
},
{
@@ -143,7 +143,7 @@
"source": [
"## Construct agents for RetrieveChat\n",
"\n",
-"We start by initialzing the `RetrieveAssistantAgent` and `RetrieveUserProxyAgent`. The system message needs to be set to \"You are a helpful assistant.\" for RetrieveAssistantAgent. The detailed instructions are given in the user message. Later we will use the `RetrieveUserProxyAgent.generate_init_prompt` to combine the instructions and a retrieval augmented generation task for an initial prompt to be sent to the LLM assistant."
+"We start by initializing the `RetrieveAssistantAgent` and `RetrieveUserProxyAgent`. The system message needs to be set to \"You are a helpful assistant.\" for RetrieveAssistantAgent. The detailed instructions are given in the user message. Later we will use the `RetrieveUserProxyAgent.generate_init_prompt` to combine the instructions and a retrieval augmented generation task for an initial prompt to be sent to the LLM assistant."
]
},
{
@@ -198,7 +198,7 @@
"# `task` indicates the kind of task we're working on. In this example, it's a `code` task.\n",
"# `chunk_token_size` is the chunk token size for the retrieve chat. By default, it is set to `max_tokens * 0.6`, here we set it to 2000.\n",
"# `custom_text_types` is a list of file types to be processed. Default is `autogen.retrieve_utils.TEXT_FORMATS`.\n",
-"# This only applies to files under the directories in `docs_path`. Explictly included files and urls will be chunked regardless of their types.\n",
+"# This only applies to files under the directories in `docs_path`. Explicitly included files and urls will be chunked regardless of their types.\n",
"# In this example, we set it to [\"mdx\"] to only process markdown files. Since no mdx files are included in the `websit/docs`,\n",
"# no files there will be processed. However, the explicitly included urls will still be processed.\n",
"ragproxyagent = RetrieveUserProxyAgent(\n",
@@ -377,7 +377,7 @@
"\n",
"\n",
"- `use_spark`: boolean, default=False | Whether to use spark to run the training in parallel spark jobs. This can be used to accelerate training on large models and large datasets, but will incur more overhead in time and thus slow down training in some cases. GPU training is not supported yet when use_spark is True. For Spark clusters, by default, we will launch one trial per executor. However, sometimes we want to launch more trials than the number of executors (e.g., local mode). In this case, we can set the environment variable `FLAML_MAX_CONCURRENT` to override the detected `num_executors`. The final number of concurrent trials will be the minimum of `n_concurrent_trials` and `num_executors`.\n",
-"- `n_concurrent_trials`: int, default=1 | The number of concurrent trials. When n_concurrent_trials > 1, FLAML performes parallel tuning.\n",
+"- `n_concurrent_trials`: int, default=1 | The number of concurrent trials. When n_concurrent_trials > 1, FLAML performs parallel tuning.\n",
"- `force_cancel`: boolean, default=False | Whether to forcely cancel Spark jobs if the search time exceeded the time budget. Spark jobs include parallel tuning jobs and Spark-based model training jobs.\n",
"\n",
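The concurrency rule quoted above — `FLAML_MAX_CONCURRENT` overriding the detected executor count, with the final value being the minimum of `n_concurrent_trials` and `num_executors` — can be sketched as follows (function and argument names here are hypothetical, not FLAML's internals):

```python
import os

def concurrent_trials(n_concurrent_trials, detected_num_executors):
    """Final number of concurrent trials per the rule described above."""
    # The FLAML_MAX_CONCURRENT environment variable overrides detection.
    num_executors = int(os.environ.get("FLAML_MAX_CONCURRENT", detected_num_executors))
    return min(n_concurrent_trials, num_executors)

# e.g. local mode: one detected executor, but we want more parallel trials.
os.environ["FLAML_MAX_CONCURRENT"] = "4"
print(concurrent_trials(8, 1))  # -> 4
```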
"An example code snippet for using parallel Spark jobs:\n",
@@ -669,7 +669,7 @@
"\n",
"\n",
"- `use_spark`: boolean, default=False | Whether to use spark to run the training in parallel spark jobs. This can be used to accelerate training on large models and large datasets, but will incur more overhead in time and thus slow down training in some cases. GPU training is not supported yet when use_spark is True. For Spark clusters, by default, we will launch one trial per executor. However, sometimes we want to launch more trials than the number of executors (e.g., local mode). In this case, we can set the environment variable `FLAML_MAX_CONCURRENT` to override the detected `num_executors`. The final number of concurrent trials will be the minimum of `n_concurrent_trials` and `num_executors`.\n",
-"- `n_concurrent_trials`: int, default=1 | The number of concurrent trials. When n_concurrent_trials > 1, FLAML performes parallel tuning.\n",
+"- `n_concurrent_trials`: int, default=1 | The number of concurrent trials. When n_concurrent_trials > 1, FLAML performs parallel tuning.\n",
"- `force_cancel`: boolean, default=False | Whether to forcely cancel Spark jobs if the search time exceeded the time budget. Spark jobs include parallel tuning jobs and Spark-based model training jobs.\n",
"\n",
"An example code snippet for using parallel Spark jobs:\n",
@@ -922,7 +922,7 @@
"\n",
"\n",
"- `use_spark`: boolean, default=False | Whether to use spark to run the training in parallel spark jobs. This can be used to accelerate training on large models and large datasets, but will incur more overhead in time and thus slow down training in some cases. GPU training is not supported yet when use_spark is True. For Spark clusters, by default, we will launch one trial per executor. However, sometimes we want to launch more trials than the number of executors (e.g., local mode). In this case, we can set the environment variable `FLAML_MAX_CONCURRENT` to override the detected `num_executors`. The final number of concurrent trials will be the minimum of `n_concurrent_trials` and `num_executors`.\n",
-"- `n_concurrent_trials`: int, default=1 | The number of concurrent trials. When n_concurrent_trials > 1, FLAML performes parallel tuning.\n",
+"- `n_concurrent_trials`: int, default=1 | The number of concurrent trials. When n_concurrent_trials > 1, FLAML performs parallel tuning.\n",
"- `force_cancel`: boolean, default=False | Whether to forcely cancel Spark jobs if the search time exceeded the time budget. Spark jobs include parallel tuning jobs and Spark-based model training jobs.\n",
"\n",
"An example code snippet for using parallel Spark jobs:\n",
@@ -1224,7 +1224,7 @@
"\n",
"\n",
"- `use_spark`: boolean, default=False | Whether to use spark to run the training in parallel spark jobs. This can be used to accelerate training on large models and large datasets, but will incur more overhead in time and thus slow down training in some cases. GPU training is not supported yet when use_spark is True. For Spark clusters, by default, we will launch one trial per executor. However, sometimes we want to launch more trials than the number of executors (e.g., local mode). In this case, we can set the environment variable `FLAML_MAX_CONCURRENT` to override the detected `num_executors`. The final number of concurrent trials will be the minimum of `n_concurrent_trials` and `num_executors`.\n",
-"- `n_concurrent_trials`: int, default=1 | The number of concurrent trials. When n_concurrent_trials > 1, FLAML performes parallel tuning.\n",
+"- `n_concurrent_trials`: int, default=1 | The number of concurrent trials. When n_concurrent_trials > 1, FLAML performs parallel tuning.\n",
"- `force_cancel`: boolean, default=False | Whether to forcely cancel Spark jobs if the search time exceeded the time budget. Spark jobs include parallel tuning jobs and Spark-based model training jobs.\n",
"\n",
"An example code snippet for using parallel Spark jobs:\n",
@@ -1638,7 +1638,7 @@
"\n",
"\n",
"- `use_spark`: boolean, default=False | Whether to use spark to run the training in parallel spark jobs. This can be used to accelerate training on large models and large datasets, but will incur more overhead in time and thus slow down training in some cases. GPU training is not supported yet when use_spark is True. For Spark clusters, by default, we will launch one trial per executor. However, sometimes we want to launch more trials than the number of executors (e.g., local mode). In this case, we can set the environment variable `FLAML_MAX_CONCURRENT` to override the detected `num_executors`. The final number of concurrent trials will be the minimum of `n_concurrent_trials` and `num_executors`.\n",
-"- `n_concurrent_trials`: int, default=1 | The number of concurrent trials. When n_concurrent_trials > 1, FLAML performes parallel tuning.\n",
+"- `n_concurrent_trials`: int, default=1 | The number of concurrent trials. When n_concurrent_trials > 1, FLAML performs parallel tuning.\n",
"- `force_cancel`: boolean, default=False | Whether to forcely cancel Spark jobs if the search time exceeded the time budget. Spark jobs include parallel tuning jobs and Spark-based model training jobs.\n",
"\n",
"An example code snippet for using parallel Spark jobs:\n",
@@ -1891,7 +1891,7 @@
"\n",
"\n",
"- `use_spark`: boolean, default=False | Whether to use spark to run the training in parallel spark jobs. This can be used to accelerate training on large models and large datasets, but will incur more overhead in time and thus slow down training in some cases. GPU training is not supported yet when use_spark is True. For Spark clusters, by default, we will launch one trial per executor. However, sometimes we want to launch more trials than the number of executors (e.g., local mode). In this case, we can set the environment variable `FLAML_MAX_CONCURRENT` to override the detected `num_executors`. The final number of concurrent trials will be the minimum of `n_concurrent_trials` and `num_executors`.\n",
-"- `n_concurrent_trials`: int, default=1 | The number of concurrent trials. When n_concurrent_trials > 1, FLAML performes parallel tuning.\n",
+"- `n_concurrent_trials`: int, default=1 | The number of concurrent trials. When n_concurrent_trials > 1, FLAML performs parallel tuning.\n",
"- `force_cancel`: boolean, default=False | Whether to forcely cancel Spark jobs if the search time exceeded the time budget. Spark jobs include parallel tuning jobs and Spark-based model training jobs.\n",
"\n",
"An example code snippet for using parallel Spark jobs:\n",
2 changes: 1 addition & 1 deletion notebook/agentchat_auto_feedback_from_code_execution.ipynb
@@ -45,7 +45,7 @@
},
"outputs": [],
"source": [
-"# %pip install pyautogen~=0.2.0b4"
+"# %pip install pyautogen>=0.2.3"
]
},
{
2 changes: 1 addition & 1 deletion notebook/agentchat_chess.ipynb
@@ -35,7 +35,7 @@
"outputs": [],
"source": [
"%%capture --no-stderr\n",
-"# %pip install \"pyautogen~=0.2.0b4\"\n",
+"# %pip install \"pyautogen>=0.2.3\"\n",
"%pip install chess -U"
]
},
4 changes: 2 additions & 2 deletions notebook/agentchat_compression.ipynb
@@ -549,7 +549,7 @@
" compress_config={\n",
" \"mode\": \"COMPRESS\",\n",
" \"trigger_count\": 600, # set this to a large number for less frequent compression\n",
-" \"verbose\": True, # to allow printing of compression information: contex before and after compression\n",
+" \"verbose\": True, # to allow printing of compression information: context before and after compression\n",
" \"leave_last_n\": 2,\n",
" }\n",
")\n",
@@ -835,7 +835,7 @@
" code_execution_config={\"work_dir\": \"coding\"},\n",
")\n",
"\n",
-"# define functions according to the function desription\n",
+"# define functions according to the function description\n",
"from IPython import get_ipython\n",
"\n",
"def exec_python(cell):\n",
8 changes: 4 additions & 4 deletions notebook/agentchat_dalle_and_gpt4v.ipynb
@@ -17,7 +17,7 @@
"source": [
"### Before everything starts, install AutoGen with the `lmm` option\n",
"```bash\n",
-"pip install \"pyautogen[lmm]~=0.2.0b4\"\n",
+"pip install \"pyautogen[lmm]>=0.2.3\"\n",
"```"
]
},
@@ -401,7 +401,7 @@
"How to create a figure that is better in terms of color, shape, text (clarity), and other things.\n",
"Reply with the following format:\n",
"\n",
-"CIRITICS: the image needs to improve...\n",
+"CRITICS: the image needs to improve...\n",
"PROMPT: here is the updated prompt!\n",
"\n",
"\"\"\",\n",
@@ -432,7 +432,7 @@
" self.msg_to_critics = f\"\"\"Here is the prompt: {img_prompt}.\n",
" Here is the figure <img result.png>.\n",
" Now, critic and create a prompt so that DALLE can give me a better image.\n",
-" Show me both \"CIRITICS\" and \"PROMPT\"!\n",
+" Show me both \"CRITICS\" and \"PROMPT\"!\n",
" \"\"\"\n",
" self.send(message=self.msg_to_critics,\n",
" recipient=self.critics,\n",
@@ -615,7 +615,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.9.16"
+"version": "3.10.13"
}
},
"nbformat": 4,
2 changes: 1 addition & 1 deletion notebook/agentchat_function_call.ipynb
@@ -36,7 +36,7 @@
"metadata": {},
"outputs": [],
"source": [
-"# %pip install \"pyautogen~=0.2.2\""
+"# %pip install \"pyautogen>=0.2.3\""
]
},
{
4 changes: 2 additions & 2 deletions notebook/agentchat_function_call_currency_calculator.ipynb
@@ -36,7 +36,7 @@
"metadata": {},
"outputs": [],
"source": [
-"# %pip install \"pyautogen~=0.2.2\""
+"# %pip install \"pyautogen>=0.2.3\""
]
},
{
@@ -222,7 +222,7 @@
"\n",
"- objects of the Pydantic BaseModel type are serialized to JSON.\n",
"\n",
-"We can check the correctness of of function map by using `._origin` property of the wrapped funtion as follows:"
+"We can check the correctness of of function map by using `._origin` property of the wrapped function as follows:"
"We can check the correctness of of function map by using `._origin` property of the wrapped function as follows:"
]
},
{
Expand Down
@@ -33,26 +33,26 @@
},
{
"cell_type": "code",
-"execution_count": 1,
+"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"%%capture --no-stderr\n",
-"# %pip install pyautogen~=0.2.0b6\n",
+"# %pip install \"pyautogen>=0.2.3\"\n",
"%pip install networkX~=3.2.1\n",
"%pip install matplotlib~=3.8.1"
]
},
{
"cell_type": "code",
-"execution_count": 2,
+"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
-"0.2.0b6\n"
+"0.2.3\n"
]
}
],
@@ -471,7 +471,7 @@
" \n",
" If you are the team leader, you should aggregate your team's total chocolate count to cooperate.\n",
" Once the team leader know their team's tally, they can suggest another team leader for them to find their team tally, because we need all three team tallys to succeed.\n",
-" Use NEXT: to sugest the next speaker, e.g., NEXT: A0.\n",
+" Use NEXT: to suggest the next speaker, e.g., NEXT: A0.\n",
" \n",
" Once we have the total tally from all nine players, sum up all three teams' tally, then terminate the discussion using TERMINATE.\n",
" \n",
@@ -777,7 +777,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.10.12"
+"version": "3.10.13"
}
},
"nbformat": 4,
2 changes: 1 addition & 1 deletion notebook/agentchat_groupchat.ipynb
@@ -35,7 +35,7 @@
"outputs": [],
"source": [
"%%capture --no-stderr\n",
-"# %pip install pyautogen~=0.2.0b4"
+"# %pip install \"pyautogen>=0.2.3\""
]
},
{
