Update logging in complex tasks (#1687)
* Logging (#1146)

* WIP:logging

* serialize request, response and client

* Fixed code formatting.

* Updated to use a global package, and added some test cases. Still very-much a draft.

* Update work in progress.

* adding cost

* log new agent

* update log_completion test in test_agent_telemetry

* tests

* fix formatting

* Added additional telemetry for wrappers and clients.

* WIP: add test for oai client and oai wrapper table

* update test_telemetry

* fix format

* More tests, update doc and clean up

* small fix for session id - moved to start_logging and return from start_logging

* update start_logging type to return str, add notebook to demonstrate use of telemetry

* add ability to get log dataframe

* precommit formatting fixes

* formatting fix

* Remove pandas dependency from telemetry and only use in notebook

* formatting fixes

* log query exceptions

* fix formatting

* fix ci

* fix comment - add notebook link in doc and fix groupchat serialization

* small fix

* do not serialize Agent

* formatting

* wip

* fix test

* serialization bug fix for soc moderator

* fix test and clean up

* wip: add version table

* fix test

* fix test

* fix test

* make the logging interface more general and fix client model logging

* fix format

* fix formatting and tests

* fix

* fix comment

* Renaming telemetry to logging

* update notebook

* update doc

* formatting

* formatting and clean up

* fix doc

* fix link and title

* fix notebook format and fix comment

* format

* try fixing agent test and update migration guide

* fix link

* debug print

* debug

* format

* add back tests

* fix tests

---------

Co-authored-by: Adam Fourney <[email protected]>
Co-authored-by: Victor Dibia <[email protected]>
Co-authored-by: Chi Wang <[email protected]>

* Validate llm_config passed to ConversableAgent (issue #1522) (#1654)

* Validate llm_config passed to ConversableAgent

Based on #1522, this commit implements additional validation checks in
`ConversableAgent`.

Add the following validation and `raise ValueError` if:

 - The `llm_config` is `None`.
 - The `llm_config` is valid, but `config_list` is missing or lacks elements.
 - The `config_list` is valid, but no `model` is specified.

The rest of the changes are code churn to adjust or add the test cases.

* Validate llm_config passed to ConversableAgent

Based on #1522, this commit implements additional validation checks in
`ConversableAgent`.

Add the following validation and `raise ValueError` if:

 - The `llm_config` is `None` (validated in `ConversableAgent`).
 - The `llm_config` has no `model` specified and `config_list` is empty
   (validated in `OpenAIWrapper`).
 - The `config_list` has at least one entry, but not all entries have a
   `model` specified (validated in `OpenAIWrapper`).

The rest of the changes are code churn to adjust or add the test cases.
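The rules above can be sketched as a standalone check. The helper name below is hypothetical; the real validation is split between `ConversableAgent.__init__` and `OpenAIWrapper`:

```python
# Illustrative sketch of the llm_config validation rules described above.
# validate_llm_config is a hypothetical name; the actual checks live in
# ConversableAgent.__init__ and OpenAIWrapper.


def validate_llm_config(llm_config):
    """Raise ValueError for the invalid shapes listed in the commit message."""
    if llm_config is None:
        raise ValueError("Please specify the value for 'llm_config'.")
    if llm_config is False:
        return  # llm-based auto reply explicitly disabled; nothing to check
    config_list = llm_config.get("config_list", [])
    if not config_list and "model" not in llm_config:
        raise ValueError("Please specify a 'model' or a non-empty 'config_list'.")
    for entry in config_list:
        if "model" not in entry:
            raise ValueError("Every entry in 'config_list' must specify a 'model'.")
```

Note that `False` is accepted deliberately: it is the documented way to disable llm-based auto reply, so only `None` and incomplete configurations are rejected.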

* Fix the test_web_surfer issue

For anyone reading this: you need to `pip install markdownify` for
`import WebSurferAgent` to succeed, which is required to run
`test_web_surfer.py` locally.

The test logic needs an `llm_config` that is neither `None` nor `False`.

Let us pray that this works as part of GitHub actions ...

* One more fix for llm_config validation contract

* update test_utils

* remove stale files

---------

Co-authored-by: Adam Fourney <[email protected]>
Co-authored-by: Victor Dibia <[email protected]>
Co-authored-by: Chi Wang <[email protected]>
Co-authored-by: Gunnar Kudrjavets <[email protected]>
5 people authored Feb 15, 2024
1 parent 0d72715 commit 6bd3918
Showing 20 changed files with 1,293 additions and 519 deletions.
7 changes: 4 additions & 3 deletions autogen/agentchat/assistant_agent.py
@@ -1,7 +1,7 @@
from typing import Callable, Dict, Literal, Optional, Union

from .conversable_agent import ConversableAgent
from ..telemetry import log_new_agent
from autogen.runtime_logging import logging_enabled, log_new_agent


class AssistantAgent(ConversableAgent):
@@ -46,7 +46,7 @@ def __init__(
name (str): agent name.
system_message (str): system message for the ChatCompletion inference.
Please override this attribute if you want to reprogram the agent.
llm_config (dict): llm inference configuration.
llm_config (dict or False or None): llm inference configuration.
Please refer to [OpenAIWrapper.create](/docs/reference/oai/client#create)
for available options.
is_termination_msg (function): a function that takes a message in the form of a dictionary
@@ -68,7 +68,8 @@ def __init__(
description=description,
**kwargs,
)
log_new_agent(self, locals())
if logging_enabled():
log_new_agent(self, locals())

# Update the provided description if None, and we are using the default system_message,
# then use the default description.
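The `if logging_enabled(): log_new_agent(...)` guard above keeps agent construction free of logging overhead (including argument serialization) when no logging session is active. A minimal sketch of such a module-level gate, with illustrative internals rather than the actual `autogen.runtime_logging` implementation:

```python
# A module-level gate mirroring the logging_enabled()/log_new_agent()
# pattern in the diff above. Internals are illustrative; the real
# autogen.runtime_logging delegates to a BaseLogger instance.
_logger = None  # set by start_logging(), cleared when logging stops


def start_logging(logger):
    global _logger
    _logger = logger


def logging_enabled() -> bool:
    return _logger is not None


def log_new_agent(agent, init_args):
    # Call sites guard with logging_enabled(), so _logger is never None here.
    _logger.append(("new_agent", agent, init_args))


# Call sites then follow the same shape as the diff:
events = []
start_logging(events)
if logging_enabled():
    log_new_agent("assistant", {"name": "assistant"})
```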
16 changes: 11 additions & 5 deletions autogen/agentchat/conversable_agent.py
@@ -14,7 +14,7 @@
from ..coding.factory import CodeExecutorFactory

from ..oai.client import OpenAIWrapper, ModelClient
from ..telemetry import log_new_agent
from ..runtime_logging import logging_enabled, log_new_agent
from ..cache.cache import Cache
from ..code_utils import (
UNKNOWN,
@@ -80,7 +80,7 @@ def __init__(
function_map: Optional[Dict[str, Callable]] = None,
code_execution_config: Union[Dict, Literal[False]] = False,
llm_config: Optional[Union[Dict, Literal[False]]] = None,
default_auto_reply: Optional[Union[str, Dict, None]] = "",
default_auto_reply: Union[str, Dict] = "",
description: Optional[str] = None,
):
"""
@@ -118,11 +118,11 @@ def __init__(
- timeout (Optional, int): The maximum execution time in seconds.
- last_n_messages (Experimental, int or str): The number of messages to look back for code execution.
If set to 'auto', it will scan backwards through all messages arriving since the agent last spoke, which is typically the last time execution was attempted. (Default: auto)
llm_config (dict or False): llm inference configuration.
llm_config (dict or False or None): llm inference configuration.
Please refer to [OpenAIWrapper.create](/docs/reference/oai/client#create)
for available options.
To disable llm-based auto reply, set to False.
default_auto_reply (str or dict or None): default auto reply when no code execution or llm-based reply is generated.
default_auto_reply (str or dict): default auto reply when no code execution or llm-based reply is generated.
description (str): a short description of the agent. This description is used by other agents
(e.g. the GroupChatManager) to decide when to call upon this agent. (Default: system_message)
"""
@@ -144,9 +144,15 @@ def __init__(
self.llm_config = self.DEFAULT_CONFIG.copy()
if isinstance(llm_config, dict):
self.llm_config.update(llm_config)
# We still have a default `llm_config` because the user didn't
# specify anything. This won't work, so raise an error to avoid
# an obscure message from the OpenAI service.
if self.llm_config == {}:
raise ValueError("Please specify the value for 'llm_config'.")
self.client = OpenAIWrapper(**self.llm_config)

log_new_agent(self, locals())
if logging_enabled():
log_new_agent(self, locals())

# Initialize standalone client cache object.
self.client_cache = None
5 changes: 3 additions & 2 deletions autogen/agentchat/groupchat.py
@@ -8,7 +8,7 @@
from ..code_utils import content_str
from .agent import Agent
from .conversable_agent import ConversableAgent
from ..telemetry import log_new_agent
from ..runtime_logging import logging_enabled, log_new_agent
from ..graph_utils import check_graph_validity, invert_disallowed_to_allowed

logger = logging.getLogger(__name__)
@@ -474,7 +474,8 @@ def __init__(
system_message=system_message,
**kwargs,
)
log_new_agent(self, locals())
if logging_enabled():
log_new_agent(self, locals())
# Store groupchat
self._groupchat = groupchat

6 changes: 4 additions & 2 deletions autogen/agentchat/user_proxy_agent.py
@@ -1,7 +1,7 @@
from typing import Callable, Dict, List, Literal, Optional, Union

from .conversable_agent import ConversableAgent
from ..telemetry import log_new_agent
from ..runtime_logging import logging_enabled, log_new_agent


class UserProxyAgent(ConversableAgent):
@@ -94,4 +94,6 @@ def __init__(
if description is not None
else self.DEFAULT_USER_PROXY_AGENT_DESCRIPTIONS[human_input_mode],
)
log_new_agent(self, locals())

if logging_enabled():
log_new_agent(self, locals())
4 changes: 4 additions & 0 deletions autogen/logger/__init__.py
@@ -0,0 +1,4 @@
from .logger_factory import LoggerFactory
from .sqlite_logger import SqliteLogger

__all__ = ("LoggerFactory", "SqliteLogger")
101 changes: 101 additions & 0 deletions autogen/logger/base_logger.py
@@ -0,0 +1,101 @@
from __future__ import annotations

from abc import ABC, abstractmethod
from typing import Dict, TYPE_CHECKING, Union
import sqlite3
import uuid

from openai import OpenAI, AzureOpenAI
from openai.types.chat import ChatCompletion

if TYPE_CHECKING:
from autogen import ConversableAgent, OpenAIWrapper


class BaseLogger(ABC):
    @abstractmethod
    def start(self) -> str:
        """
        Open a connection to the logging database, and start recording.

        Returns:
            session_id (str): a unique id for the logging session
        """
        ...

    @abstractmethod
    def log_chat_completion(
        self,
        invocation_id: uuid.UUID,
        client_id: int,
        wrapper_id: int,
        request: Dict,
        response: Union[str, ChatCompletion],
        is_cached: int,
        cost: float,
        start_time: str,
    ) -> None:
        """
        Log a chat completion to the database.

        In AutoGen, chat completions are somewhat complicated because they are handled by the
        `autogen.oai.OpenAIWrapper` class. One invocation of `create` can lead to multiple underlying
        OpenAI calls, depending on the llm_config list used, and any errors or retries.

        Args:
            invocation_id (uuid):   A unique identifier for the invocation of the OpenAIWrapper.create method call
            client_id (int):        A unique identifier for the underlying OpenAI client instance
            wrapper_id (int):       A unique identifier for the OpenAIWrapper instance
            request (dict):         A dictionary representing the request or call to the OpenAI client endpoint
            response (str or ChatCompletion): The response from OpenAI
            is_cached (int):        1 if the response was a cache hit, 0 otherwise
            cost (float):           The cost of the OpenAI response
            start_time (str):       A string representing the moment the request was initiated
        """
        ...

    @abstractmethod
    def log_new_agent(self, agent: ConversableAgent, init_args: Dict) -> None:
        """
        Log the birth of a new agent.

        Args:
            agent (ConversableAgent): The agent to log.
            init_args (dict):         The arguments passed to construct the conversable agent
        """
        ...

    @abstractmethod
    def log_new_wrapper(self, wrapper: OpenAIWrapper, init_args: Dict) -> None:
        """
        Log the birth of a new OpenAIWrapper.

        Args:
            wrapper (OpenAIWrapper): The wrapper to log.
            init_args (dict):        The arguments passed to construct the wrapper
        """
        ...

    @abstractmethod
    def log_new_client(self, client: Union[AzureOpenAI, OpenAI], wrapper: OpenAIWrapper, init_args: Dict) -> None:
        """
        Log the birth of a new OpenAI client.

        Args:
            client (AzureOpenAI or OpenAI): The OpenAI client to log.
            wrapper (OpenAIWrapper):        The wrapper this client belongs to.
            init_args (dict):               The arguments passed to construct the client
        """
        ...

    @abstractmethod
    def stop(self) -> None:
        """
        Close the connection to the logging database, and stop logging.
        """
        ...

    @abstractmethod
    def get_connection(self) -> sqlite3.Connection:
        """
        Return a connection to the logging database.
        """
        ...
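As a sketch, here is what a concrete implementation of this contract can look like. `MiniLogger` and `InMemoryLogger` below are simplified stand-ins (fewer parameters, no OpenAI types, so the example runs on its own) and are not part of AutoGen:

```python
import uuid
from abc import ABC, abstractmethod


# Simplified stand-in for the BaseLogger contract above; not part of AutoGen.
class MiniLogger(ABC):
    @abstractmethod
    def start(self) -> str:
        ...

    @abstractmethod
    def log_chat_completion(self, invocation_id, request, response, cost) -> None:
        ...

    @abstractmethod
    def stop(self) -> None:
        ...


class InMemoryLogger(MiniLogger):
    def start(self) -> str:
        # Mirror BaseLogger.start: return a unique session id.
        self.session_id = str(uuid.uuid4())
        self.records = []
        return self.session_id

    def log_chat_completion(self, invocation_id, request, response, cost) -> None:
        self.records.append(
            {"invocation_id": invocation_id, "request": request, "response": response, "cost": cost}
        )

    def stop(self) -> None:
        self.records = []
```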
17 changes: 17 additions & 0 deletions autogen/logger/logger_factory.py
@@ -0,0 +1,17 @@
from typing import Any, Dict, Optional
from autogen.logger.base_logger import BaseLogger
from autogen.logger.sqlite_logger import SqliteLogger

__all__ = ("LoggerFactory",)


class LoggerFactory:
    @staticmethod
    def get_logger(logger_type: str = "sqlite", config: Optional[Dict[str, Any]] = None) -> BaseLogger:
        if config is None:
            config = {}

        if logger_type == "sqlite":
            return SqliteLogger(config)
        else:
            raise ValueError(f"[logger_factory] Unknown logger type: {logger_type}")
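Usage follows the standard factory shape: `LoggerFactory.get_logger("sqlite")` returns a `SqliteLogger`, and any other type raises. A standalone mirror of that dispatch (with a hypothetical `FakeSqliteLogger` stand-in so the sketch runs without autogen installed):

```python
# Standalone mirror of the factory dispatch above. FakeSqliteLogger is a
# hypothetical stand-in for SqliteLogger, used only so this sketch runs.
class FakeSqliteLogger:
    def __init__(self, config):
        self.config = config


def get_logger(logger_type: str = "sqlite", config=None):
    if config is None:
        config = {}
    if logger_type == "sqlite":
        return FakeSqliteLogger(config)
    raise ValueError(f"[logger_factory] Unknown logger type: {logger_type}")
```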
36 changes: 36 additions & 0 deletions autogen/logger/logger_utils.py
@@ -0,0 +1,36 @@
import datetime
import inspect
from typing import Any, Dict, List, Tuple, Union

__all__ = ("get_current_ts", "to_dict")


def get_current_ts() -> str:
    return datetime.datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S.%f")


def to_dict(
    obj: Union[int, float, str, bool, Dict[Any, Any], List[Any], Tuple[Any, ...], Any],
    exclude: Tuple[str, ...] = (),
    no_recursive: Tuple[Any, ...] = (),
) -> Any:
    if isinstance(obj, (int, float, str, bool)):
        return obj
    elif callable(obj):
        return inspect.getsource(obj).strip()
    elif isinstance(obj, dict):
        return {
            str(k): to_dict(str(v)) if isinstance(v, no_recursive) else to_dict(v, exclude, no_recursive)
            for k, v in obj.items()
            if k not in exclude
        }
    elif isinstance(obj, (list, tuple)):
        return [to_dict(str(v)) if isinstance(v, no_recursive) else to_dict(v, exclude, no_recursive) for v in obj]
    elif hasattr(obj, "__dict__"):
        return {
            str(k): to_dict(str(v)) if isinstance(v, no_recursive) else to_dict(v, exclude, no_recursive)
            for k, v in vars(obj).items()
            if k not in exclude
        }
    else:
        return obj
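Reproducing `to_dict` standalone (annotations dropped for brevity) shows its behavior on nested containers and plain objects; the `Cfg` class at the end is a hypothetical example, not an autogen type:

```python
import inspect


# Reproduced from the diff above so the example runs on its own:
# recursively converts objects to JSON-friendly structures.
def to_dict(obj, exclude=(), no_recursive=()):
    if isinstance(obj, (int, float, str, bool)):
        return obj
    elif callable(obj):
        return inspect.getsource(obj).strip()
    elif isinstance(obj, dict):
        return {
            str(k): to_dict(str(v)) if isinstance(v, no_recursive) else to_dict(v, exclude, no_recursive)
            for k, v in obj.items()
            if k not in exclude
        }
    elif isinstance(obj, (list, tuple)):
        return [to_dict(str(v)) if isinstance(v, no_recursive) else to_dict(v, exclude, no_recursive) for v in obj]
    elif hasattr(obj, "__dict__"):
        return {
            str(k): to_dict(str(v)) if isinstance(v, no_recursive) else to_dict(v, exclude, no_recursive)
            for k, v in vars(obj).items()
            if k not in exclude
        }
    else:
        return obj


# Hypothetical config object for illustration.
class Cfg:
    def __init__(self):
        self.model = "gpt-4"
        self.temperature = 0.0
        self.api_key = "secret"
```

For example, `to_dict(Cfg(), exclude=("api_key",))` returns `{"model": "gpt-4", "temperature": 0.0}`: the object is flattened via `vars()` and the excluded key is dropped, which is how the loggers keep secrets out of the database.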