
Commit c03558f

Merge remote-tracking branch 'origin/metaagent' into metaagent

Parents: c1e8bf3, 2b3a9ae

90 files changed: +3950, -589 lines


.github/workflows/contrib-tests.yml (+36)

@@ -638,3 +638,39 @@ jobs:
         with:
           file: ./coverage.xml
           flags: unittests
+
+  CohereTest:
+    runs-on: ${{ matrix.os }}
+    strategy:
+      matrix:
+        os: [ubuntu-latest, macos-latest, windows-latest]
+        python-version: ["3.9", "3.10", "3.11", "3.12"]
+    steps:
+      - uses: actions/checkout@v4
+        with:
+          lfs: true
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v5
+        with:
+          python-version: ${{ matrix.python-version }}
+      - name: Install packages and dependencies for all tests
+        run: |
+          python -m pip install --upgrade pip wheel
+          pip install pytest-cov>=5
+      - name: Install packages and dependencies for Cohere
+        run: |
+          pip install -e .[cohere,test]
+      - name: Set AUTOGEN_USE_DOCKER based on OS
+        shell: bash
+        run: |
+          if [[ ${{ matrix.os }} != ubuntu-latest ]]; then
+            echo "AUTOGEN_USE_DOCKER=False" >> $GITHUB_ENV
+          fi
+      - name: Coverage
+        run: |
+          pytest test/oai/test_cohere.py --skip-openai
+      - name: Upload coverage to Codecov
+        uses: codecov/codecov-action@v3
+        with:
+          file: ./coverage.xml
+          flags: unittests
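To reproduce the new CohereTest job outside of CI, the same steps can be run locally. A sketch in Python that mirrors the workflow above, run from a checkout of the repository:

    import os
    import subprocess
    import sys

    # Mirror the workflow: install the Cohere extras, disable Docker-based code
    # execution on non-Linux hosts, then run the Cohere client tests.
    subprocess.check_call([sys.executable, "-m", "pip", "install", "-e", ".[cohere,test]"])

    if sys.platform != "linux":
        os.environ["AUTOGEN_USE_DOCKER"] = "False"

    subprocess.check_call([sys.executable, "-m", "pytest", "test/oai/test_cohere.py", "--skip-openai"])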

README.md (+6, -1)

@@ -66,7 +66,12 @@
 
 ## What is AutoGen
 
-AutoGen is a framework that enables the development of LLM applications using multiple agents that can converse with each other to solve tasks. AutoGen agents are customizable, conversable, and seamlessly allow human participation. They can operate in various modes that employ combinations of LLMs, human inputs, and tools.
+AutoGen is an open-source programming framework for building AI agents and facilitating cooperation among multiple agents to solve tasks. AutoGen aims to streamline the development and research of agentic AI, much as PyTorch does for deep learning. It offers features such as agents that can interact with one another, support for a wide range of large language models (LLMs) and for tool use, autonomous and human-in-the-loop workflows, and multi-agent conversation patterns.
+
+**Open Source Statement**: The project welcomes contributions from developers and organizations worldwide. Our goal is to foster a collaborative and inclusive community where diverse perspectives and expertise can drive innovation and enhance the project's capabilities. Whether you are an individual contributor or represent an organization, we invite you to join us in shaping the future of this project. Together, we can build something truly remarkable.
+
+The project is currently maintained by a [dynamic group of volunteers](https://butternut-swordtail-8a5.notion.site/410675be605442d3ada9a42eb4dfef30?v=fa5d0a79fd3d4c0f9c112951b2831cbb&pvs=4) from several organizations. Contact project administrators Chi Wang and Qingyun Wu via [email protected] if you are interested in becoming a maintainer.
+
 
 ![AutoGen Overview](https://github.com/microsoft/autogen/blob/main/website/static/img/autogen_agentchat.png)
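As a quick illustration of the multi-agent conversation pattern the new paragraph describes, here is a minimal sketch using AutoGen's AssistantAgent and UserProxyAgent; the model name and environment variable are placeholders, not part of this commit:

    import os

    from autogen import AssistantAgent, UserProxyAgent

    # Placeholder LLM config; any provider supported by AutoGen's config_list works here.
    llm_config = {"config_list": [{"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"]}]}

    # An LLM-backed agent that writes answers and code.
    assistant = AssistantAgent("assistant", llm_config=llm_config)

    # A proxy agent that can execute the code it receives; set human_input_mode="ALWAYS"
    # for a human-in-the-loop workflow.
    user_proxy = UserProxyAgent(
        "user_proxy",
        human_input_mode="NEVER",
        code_execution_config={"work_dir": "coding", "use_docker": False},
    )

    # The two agents converse until the task is done or a termination condition is met.
    user_proxy.initiate_chat(assistant, message="Plot a chart of META and TESLA stock price change.")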

(AgentEval contrib README, +3, -1)

@@ -1,7 +1,9 @@
-Agents for running the AgentEval pipeline.
+Agents for running the [AgentEval](https://microsoft.github.io/autogen/blog/2023/11/20/AgentEval/) pipeline.
 
 AgentEval is a process for evaluating an LLM-based system's performance on a given task.
 
 When given a task to evaluate and a few example runs, the critic and subcritic agents create evaluation criteria for evaluating a system's solution. Once the criteria have been created, the quantifier agent can evaluate subsequent task solutions based on the generated criteria.
 
 For more information see: [AgentEval Integration Roadmap](https://github.com/microsoft/autogen/issues/2162)
+
+See our [blog post](https://microsoft.github.io/autogen/blog/2024/06/21/AgentEval) for usage examples and general explanations.
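The criteria-then-quantify flow described above can be sketched with plain AutoGen agents. This is not the AgentEval module's own API (see the linked blog post for that); the agent names, system messages, and LLM config below are illustrative assumptions:

    import os

    from autogen import AssistantAgent, UserProxyAgent

    # Illustrative config; any provider supported by AutoGen works here.
    llm_config = {"config_list": [{"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"]}]}

    # A critic-style agent drafts evaluation criteria from the task and example runs.
    critic = AssistantAgent(
        "critic",
        system_message="Given a task description and example runs, propose evaluation "
        "criteria, each with a name, a description, and accepted values.",
        llm_config=llm_config,
    )

    # A quantifier-style agent scores a candidate solution against those criteria.
    quantifier = AssistantAgent(
        "quantifier",
        system_message="Given evaluation criteria and a candidate solution, rate the "
        "solution on each criterion and return the ratings as JSON.",
        llm_config=llm_config,
    )

    driver = UserProxyAgent("driver", human_input_mode="NEVER", code_execution_config=False)

    # Step 1: generate criteria once from the task description and example runs.
    criteria = driver.initiate_chat(
        critic,
        message="Task: solve grade-school math word problems.\nExample runs: <paste runs here>",
        max_turns=1,
    ).summary

    # Step 2: score each subsequent solution against the generated criteria.
    scores = driver.initiate_chat(
        quantifier,
        message=f"Criteria:\n{criteria}\n\nSolution to evaluate: <paste solution here>",
        max_turns=1,
    ).summary
    print(scores)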

autogen/agentchat/contrib/llamaindex_conversable_agent.py (+1, -2)

@@ -8,15 +8,14 @@
 
 try:
     from llama_index.core.agent.runner.base import AgentRunner
+    from llama_index.core.base.llms.types import ChatMessage
     from llama_index.core.chat_engine.types import AgentChatResponse
-    from llama_index_client import ChatMessage
 except ImportError as e:
     logger.fatal("Failed to import llama-index. Try running 'pip install llama-index'")
     raise e
 
 
 class LLamaIndexConversableAgent(ConversableAgent):
-
     def __init__(
         self,
         name: str,

autogen/logger/file_logger.py (+11, -1)

@@ -18,6 +18,7 @@
 if TYPE_CHECKING:
     from autogen import Agent, ConversableAgent, OpenAIWrapper
     from autogen.oai.anthropic import AnthropicClient
+    from autogen.oai.cohere import CohereClient
     from autogen.oai.gemini import GeminiClient
     from autogen.oai.groq import GroqClient
     from autogen.oai.mistral import MistralAIClient

@@ -205,7 +206,16 @@ def log_new_wrapper(
 
     def log_new_client(
         self,
-        client: AzureOpenAI | OpenAI | GeminiClient | AnthropicClient | MistralAIClient | TogetherClient | GroqClient,
+        client: (
+            AzureOpenAI
+            | OpenAI
+            | GeminiClient
+            | AnthropicClient
+            | MistralAIClient
+            | TogetherClient
+            | GroqClient
+            | CohereClient
+        ),
         wrapper: OpenAIWrapper,
         init_args: Dict[str, Any],
     ) -> None:

autogen/logger/sqlite_logger.py (+11, -1)

@@ -19,6 +19,7 @@
 if TYPE_CHECKING:
     from autogen import Agent, ConversableAgent, OpenAIWrapper
     from autogen.oai.anthropic import AnthropicClient
+    from autogen.oai.cohere import CohereClient
     from autogen.oai.gemini import GeminiClient
     from autogen.oai.groq import GroqClient
     from autogen.oai.mistral import MistralAIClient

@@ -392,7 +393,16 @@ def log_function_use(self, source: Union[str, Agent], function: F, args: Dict[st
 
     def log_new_client(
         self,
-        client: Union[AzureOpenAI, OpenAI, GeminiClient, AnthropicClient, MistralAIClient, TogetherClient, GroqClient],
+        client: Union[
+            AzureOpenAI,
+            OpenAI,
+            GeminiClient,
+            AnthropicClient,
+            MistralAIClient,
+            TogetherClient,
+            GroqClient,
+            CohereClient,
+        ],
         wrapper: OpenAIWrapper,
         init_args: Dict[str, Any],
     ) -> None:
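Both loggers are switched on through autogen.runtime_logging, which is also how newly supported CohereClient instances end up being passed to log_new_client. A brief sketch, assuming the file logger accepts a "filename" entry in its config dict (the SQLite logger, logger_type="sqlite", is the default):

    from autogen import runtime_logging

    # Start runtime logging with the file-based logger; "filename" is assumed to be
    # the relevant config key.
    session_id = runtime_logging.start(logger_type="file", config={"filename": "runtime.log"})

    # ... build agents and run chats here; each new model client (now including
    # CohereClient) is recorded through log_new_client ...

    runtime_logging.stop()
    print(f"Logged session {session_id} to runtime.log")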

autogen/oai/client.py (+12)

@@ -77,6 +77,13 @@
 except ImportError as e:
     groq_import_exception = e
 
+try:
+    from autogen.oai.cohere import CohereClient
+
+    cohere_import_exception: Optional[ImportError] = None
+except ImportError as e:
+    cohere_import_exception = e
+
 logger = logging.getLogger(__name__)
 if not logger.handlers:
     # Add the console handler.

@@ -497,6 +504,11 @@ def _register_default_client(self, config: Dict[str, Any], openai_config: Dict[s
                 raise ImportError("Please install `groq` to use the Groq API.")
             client = GroqClient(**openai_config)
             self._clients.append(client)
+        elif api_type is not None and api_type.startswith("cohere"):
+            if cohere_import_exception:
+                raise ImportError("Please install `cohere` to use the Cohere API.")
+            client = CohereClient(**openai_config)
+            self._clients.append(client)
         else:
             client = OpenAI(**openai_config)
             self._clients.append(OpenAIClient(client))
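With the branch above in place, any config entry whose api_type starts with "cohere" is routed to CohereClient. A minimal usage sketch; the model name and environment-variable name are assumptions, not something this diff defines:

    import os

    from autogen import AssistantAgent, UserProxyAgent

    # api_type "cohere" routes this entry to CohereClient (per the branch added above).
    config_list = [
        {
            "api_type": "cohere",
            "model": "command-r-plus",  # assumed model name; use any model your Cohere account offers
            "api_key": os.environ["COHERE_API_KEY"],  # assumed environment variable
        }
    ]

    assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
    user = UserProxyAgent("user", human_input_mode="NEVER", code_execution_config=False)
    user.initiate_chat(assistant, message="Say hello in one sentence.", max_turns=1)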
