Memory Interface in AgentChat #4438

Status: Open. 22 commits to merge into base: main.
Commits:
48d7ecb  initial base memroy impl (victordibia, Nov 30, 2024)
f70f61e  update, add example with chromadb (victordibia, Dec 1, 2024)
24fa684  include mimetype consideration (victordibia, Dec 1, 2024)
9e94ec8  Merge remote-tracking branch 'origin/main' into agentchat_memory_vd (victordibia, Dec 19, 2024)
0b7469e  add transform method (victordibia, Dec 20, 2024)
138ee05  update to address feedback, will update after 4681 is merged (victordibia, Dec 20, 2024)
a94634b  Merge remote-tracking branch 'origin/main' into agentchat_memory_vd (victordibia, Dec 20, 2024)
675924c  update memory impl, (victordibia, Dec 25, 2024)
b1da7e2  remove chroma db, typing fixes (victordibia, Jan 3, 2025)
f0812a3  Merge remote-tracking branch 'origin/main' into agentchat_memory_vd (victordibia, Jan 3, 2025)
32701db  format, add test (victordibia, Jan 4, 2025)
d7bf4d2  update uv lock (victordibia, Jan 4, 2025)
afbef4d  update docs (victordibia, Jan 4, 2025)
003bb2e  format updates (victordibia, Jan 4, 2025)
7b15c2e  update notebook (victordibia, Jan 4, 2025)
b353110  add memoryqueryevent message, yield message for observability. (victordibia, Jan 4, 2025)
e1a9be2  Merge branch 'main' into agentchat_memory_vd (victordibia, Jan 4, 2025)
c797f6a  minor fixes, make score optional/none (victordibia, Jan 4, 2025)
dfb1da6  Merge branch 'agentchat_memory_vd' of github.com:microsoft/autogen in… (victordibia, Jan 4, 2025)
97ed7f5  Update python/packages/autogen-agentchat/src/autogen_agentchat/agents… (victordibia, Jan 6, 2025)
5a74fdf  Merge branch 'main' into agentchat_memory_vd (victordibia, Jan 6, 2025)
24bd81e  update tests to improve cov (victordibia, Jan 7, 2025)
python/packages/autogen-agentchat/src/autogen_agentchat/agents/_assistant_agent.py
@@ -32,10 +32,12 @@
from .. import EVENT_LOGGER_NAME
from ..base import Handoff as HandoffBase
from ..base import Response
from ..memory._base_memory import Memory
from ..messages import (
AgentEvent,
ChatMessage,
HandoffMessage,
MemoryQueryEvent,
MultiModalMessage,
TextMessage,
ToolCallExecutionEvent,
@@ -133,6 +135,7 @@
will be returned as the response.
Available variables: `{tool_name}`, `{arguments}`, `{result}`.
For example, `"{tool_name}: {result}"` will create a summary like `"tool_name: result"`.
memory (Sequence[Memory] | None, optional): The memory store to use for the agent. Defaults to `None`.

Raises:
ValueError: If tool names are not unique.
@@ -253,9 +256,22 @@
) = "You are a helpful AI assistant. Solve tasks using your tools. Reply with TERMINATE when the task has been completed.",
reflect_on_tool_use: bool = False,
tool_call_summary_format: str = "{result}",
memory: Sequence[Memory] | None = None,
):
super().__init__(name=name, description=description)
self._model_client = model_client
self._memory = None
if memory is not None:
if isinstance(memory, Memory):
self._memory = [memory]
elif isinstance(memory, list):
self._memory = memory
else:
raise TypeError(f"Expected Memory, List[Memory], or None, got {type(memory)}")

self._system_messages: List[
SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage
] = []
if system_message is None:
self._system_messages = []
else:
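The `memory` argument above accepts a single `Memory`, a list of them, or `None`. That normalization logic can be sketched standalone; the `Memory` class below is a hypothetical stand-in for the PR's protocol, used only so the `isinstance` checks run:

```python
from typing import Any, List, Optional


class Memory:  # hypothetical stand-in for the PR's Memory protocol
    pass


def normalize_memory(memory: Any) -> Optional[List[Memory]]:
    # Mirrors the constructor: a single Memory becomes a one-element
    # list, a list passes through, None stays None, anything else raises.
    if memory is None:
        return None
    if isinstance(memory, Memory):
        return [memory]
    if isinstance(memory, list):
        return memory
    raise TypeError(f"Expected Memory, List[Memory], or None, got {type(memory)}")


m = Memory()
```

This keeps the rest of the agent free to iterate over `self._memory` uniformly without re-checking the input shape.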
@@ -338,6 +354,15 @@
# Inner messages.
inner_messages: List[AgentEvent | ChatMessage] = []

# Update the model context with memory content.
if self._memory:
for memory in self._memory:
memory_query_result = await memory.transform(self._model_context)
if memory_query_result and len(memory_query_result) > 0:
memory_query_event_msg = MemoryQueryEvent(content=memory_query_result, source=self.name)
inner_messages.append(memory_query_event_msg)
yield memory_query_event_msg

# Generate an inference result based on the current model context.
llm_messages = self._system_messages + await self._model_context.get_messages()
result = await self._model_client.create(
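The `on_messages` change above queries each memory store via `transform` and yields a `MemoryQueryEvent` only when results come back non-empty, so callers can observe what was injected into the model context. A self-contained sketch of that flow, with simplified stand-ins for the event type and the memory object (not the PR's actual classes):

```python
import asyncio
from dataclasses import dataclass
from typing import Any, AsyncIterator, List, Sequence


@dataclass
class MemoryQueryEvent:  # simplified stand-in for the PR's event type
    content: List[Any]
    source: str


class StaticMemory:
    """Hypothetical memory whose transform() returns canned results."""

    def __init__(self, results: List[Any]) -> None:
        self._results = results

    async def transform(self, model_context: Any) -> List[Any]:
        return self._results


async def emit_memory_events(
    memories: Sequence[StaticMemory], model_context: Any, source: str
) -> AsyncIterator[MemoryQueryEvent]:
    # Mirrors the hunk above: only non-empty query results yield an event.
    for memory in memories:
        results = await memory.transform(model_context)
        if results:
            yield MemoryQueryEvent(content=results, source=source)


async def main() -> List[MemoryQueryEvent]:
    memories = [StaticMemory(["user prefers metric units"]), StaticMemory([])]
    return [event async for event in emit_memory_events(memories, None, "assistant")]


events = asyncio.run(main())
```

Yielding the event (rather than only appending it to `inner_messages`) is what makes memory retrieval visible in the streamed output.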
@@ -0,0 +1,10 @@
from ._base_memory import Memory, MemoryContent, MemoryMimeType
from ._list_memory import ListMemory, ListMemoryConfig

__all__ = [
"Memory",
"MemoryContent",
"MemoryMimeType",
"ListMemory",
"ListMemoryConfig",
]
python/packages/autogen-agentchat/src/autogen_agentchat/memory/_base_memory.py
@@ -0,0 +1,107 @@
from datetime import datetime
from enum import Enum
from typing import Any, Dict, List, Protocol, Union, runtime_checkable

from autogen_core import CancellationToken, Image
from autogen_core.model_context import ChatCompletionContext
from pydantic import BaseModel, ConfigDict, Field


class MemoryMimeType(Enum):
"""Supported MIME types for memory content."""

TEXT = "text/plain"
JSON = "application/json"
MARKDOWN = "text/markdown"
IMAGE = "image/*"
BINARY = "application/octet-stream"


ContentType = Union[str, bytes, Dict[str, Any], Image]


class MemoryContent(BaseModel):
content: ContentType
mime_type: MemoryMimeType | str
metadata: Dict[str, Any] | None = None
timestamp: datetime | None = None
source: str | None = None
score: float | None = None

model_config = ConfigDict(arbitrary_types_allowed=True)
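As a rough illustration of how an entry carries content, MIME type, and optional metadata, the snippet below builds one; the dataclass here is a simplified stand-in for the pydantic `MemoryContent` model above, not the real class:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum
from typing import Any, Dict, Optional, Union


class MemoryMimeType(Enum):  # mirrors the enum in the diff
    TEXT = "text/plain"
    JSON = "application/json"


@dataclass
class MemoryContent:  # plain-dataclass stand-in for the pydantic model
    content: Union[str, bytes, Dict[str, Any]]
    mime_type: Union[MemoryMimeType, str]
    metadata: Optional[Dict[str, Any]] = None
    timestamp: Optional[datetime] = None
    source: Optional[str] = None
    score: Optional[float] = None


note = MemoryContent(
    content="The user prefers short answers.",
    mime_type=MemoryMimeType.TEXT,
    timestamp=datetime.now(timezone.utc),
    source="user",
)
```

`score` stays `None` until a store assigns a relevance value at query time, which is why the field was made optional in commit c797f6a.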


class BaseMemoryConfig(BaseModel):
"""Base configuration for memory implementations."""

k: int = Field(default=5, description="Number of results to return")
score_threshold: float | None = Field(default=None, description="Minimum relevance score")

model_config = ConfigDict(arbitrary_types_allowed=True)


@runtime_checkable
class Memory(Protocol):
"""Protocol defining the interface for memory implementations."""

@property
def name(self) -> str | None:
"""The name of this memory implementation."""
...

@property
def config(self) -> BaseMemoryConfig:
"""The configuration for this memory implementation."""
...

async def transform(
self,
model_context: ChatCompletionContext,
) -> List[MemoryContent]:
"""
Transform the provided model context using relevant memory content.

Args:
model_context: The context to transform

Returns:
List of memory entries with relevance scores
"""
...

async def query(
self,
query: MemoryContent,
cancellation_token: "CancellationToken | None" = None,
**kwargs: Any,
) -> List[MemoryContent]:
"""
Query the memory store and return relevant entries.

Args:
query: Query content item
cancellation_token: Optional token to cancel operation
**kwargs: Additional implementation-specific parameters

Returns:
List of memory entries with relevance scores
"""
...

async def add(self, content: MemoryContent, cancellation_token: "CancellationToken | None" = None) -> None:
"""
Add new content to memory.

Args:
content: The memory content to add
cancellation_token: Optional token to cancel operation
"""
...

async def clear(self) -> None:
"""Clear all entries from memory."""
...

async def cleanup(self) -> None:
"""Clean up any resources used by the memory implementation."""
...
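To make the protocol concrete, here is a toy list-backed sketch (not the PR's `ListMemory`) that covers the `add` / `query` / `clear` / `cleanup` surface and applies the `k` and `score_threshold` settings from `BaseMemoryConfig`; the substring-match "relevance" is purely illustrative:

```python
import asyncio
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Entry:  # minimal stand-in for MemoryContent
    text: str
    score: Optional[float] = None


class SimpleListMemory:
    """Toy list-backed store shaped like the Memory protocol above."""

    def __init__(self, k: int = 5, score_threshold: Optional[float] = None) -> None:
        self._k = k
        self._score_threshold = score_threshold
        self._entries: List[Entry] = []

    async def add(self, entry: Entry, cancellation_token=None) -> None:
        self._entries.append(entry)

    async def query(self, query: str, cancellation_token=None, **kwargs) -> List[Entry]:
        # Naive relevance: substring match, then threshold and top-k cutoff.
        hits = [e for e in self._entries if query.lower() in e.text.lower()]
        if self._score_threshold is not None:
            hits = [e for e in hits if (e.score or 0.0) >= self._score_threshold]
        return hits[: self._k]

    async def clear(self) -> None:
        self._entries.clear()

    async def cleanup(self) -> None:
        pass  # nothing to release in this toy store


async def demo() -> List[Entry]:
    memory = SimpleListMemory(k=2, score_threshold=0.3)
    await memory.add(Entry("Paris is the capital of France", score=0.9))
    await memory.add(Entry("The user likes Paris in spring", score=0.4))
    await memory.add(Entry("Low-relevance note about Paris", score=0.1))
    return await memory.query("paris")


hits = asyncio.run(demo())
```

Because `Memory` is a `runtime_checkable` `Protocol`, a class like this would satisfy the agent's `isinstance(memory, Memory)` check structurally, without inheriting from anything.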
