Docs updates (#2229)
zinyando authored Feb 19, 2025
1 parent d4df9f6 commit db51295
Showing 4 changed files with 131 additions and 44 deletions.
60 changes: 60 additions & 0 deletions docs/core-concepts/memory-operations.mdx
@@ -0,0 +1,60 @@
---
title: Memory Operations
description: Understanding the core operations for managing memories in AI applications
icon: "gear"
iconType: "solid"
---

Mem0 provides two core operations for managing memories in AI applications: adding new memories and searching existing ones. This guide covers how these operations work and how to use them effectively in your application.


## Core Operations

Mem0 exposes two main endpoints for interacting with memories:
- The `add` endpoint for ingesting conversations and storing them as memories
- The `search` endpoint for retrieving relevant memories based on queries
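The contract of these two endpoints can be illustrated with a self-contained sketch. The `MemoryStore` class, its keyword-overlap scoring, and the sample memories below are illustrative stand-ins, not Mem0's actual client or ranking logic:

```python
class MemoryStore:
    """Toy in-memory stand-in for the add/search endpoint pair."""

    def __init__(self):
        self.memories = []

    def add(self, text, user_id):
        """Ingest a piece of conversation as a stored memory."""
        self.memories.append({"user_id": user_id, "memory": text})

    def search(self, query, user_id):
        """Return this user's memories ranked by word overlap with the query."""
        query_words = set(query.lower().split())
        scored = []
        for m in self.memories:
            if m["user_id"] != user_id:
                continue
            overlap = len(query_words & set(m["memory"].lower().split()))
            if overlap:
                scored.append({**m, "score": overlap})
        return sorted(scored, key=lambda m: m["score"], reverse=True)

store = MemoryStore()
store.add("Alice prefers vegetarian restaurants", user_id="alice")
store.add("Alice is planning a trip to Tokyo", user_id="alice")
results = store.search("Which restaurants does Alice like", user_id="alice")
```

A real deployment would call the hosted API instead, but the shape is the same: `add` takes conversation text plus an identity, and `search` takes a query plus the same identity and returns scored memories.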

### Adding Memories

<Frame caption="Architecture diagram illustrating the process of adding memories.">
  <img src="../images/add_architecture.png" alt="Add operation architecture diagram" />
</Frame>

The add operation processes conversations through several steps:

1. **Information Extraction**
* An LLM extracts relevant memories from the conversation
* It identifies important entities and their relationships

2. **Conflict Resolution**
* The system compares new information with existing data
* It identifies and resolves any contradictions

3. **Memory Storage**
* Vector database stores the actual memories
* Graph database maintains relationship information
* Information is continuously updated with each interaction
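The conflict-resolution step above can be sketched as follows. In Mem0 itself an LLM decides whether to add, update, or ignore each extracted fact; here a plain key comparison stands in for that judgment, and all names are illustrative:

```python
def resolve(store, extracted_facts):
    """Merge extracted facts into the store; return the action taken for each.

    A fact is keyed by (subject, attribute), so a new value for an existing
    key replaces the stored one instead of creating a contradiction.
    """
    actions = []
    for fact in extracted_facts:
        key = (fact["subject"], fact["attribute"])
        if key in store:
            if store[key] != fact["value"]:
                store[key] = fact["value"]        # contradiction -> update
                actions.append(("UPDATE", key))
            else:
                actions.append(("NOOP", key))     # already known
        else:
            store[key] = fact["value"]            # new information -> add
            actions.append(("ADD", key))
    return actions

store = {}
resolve(store, [{"subject": "alice", "attribute": "diet", "value": "vegetarian"}])
actions = resolve(store, [
    {"subject": "alice", "attribute": "diet", "value": "vegan"},   # contradicts stored fact
    {"subject": "alice", "attribute": "city", "value": "Berlin"},  # new information
])
```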

### Searching Memories

<Frame caption="Architecture diagram illustrating the memory search process.">
  <img src="../images/search_architecture.png" alt="Search operation architecture diagram" />
</Frame>

The search operation retrieves memories through a multi-step process:

1. **Query Processing**
* LLM processes and optimizes the search query
* System prepares filters for targeted search

2. **Vector Search**
* Performs semantic search using the optimized query
* Ranks results by relevance to the query
* Applies specified filters (user, agent, metadata, etc.)

3. **Result Processing**
* Combines and ranks the search results
* Returns memories with relevance scores
* Includes associated metadata and timestamps

This semantic search approach ensures accurate memory retrieval, whether you're looking for specific information or exploring related concepts.
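The multi-step search above can be sketched with toy vectors. The bag-of-words "embedding" below is a stand-in for both the LLM-optimized query and real vector embeddings; the filter and scored-result shapes are illustrative, not Mem0's API:

```python
import math

def embed(text):
    """Toy embedding: a bag-of-words count vector."""
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0) for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(memories, query, filters=None):
    """Filter, score, and rank memories against the query vector."""
    filters = filters or {}
    q = embed(query)
    results = []
    for m in memories:
        if any(m.get(k) != v for k, v in filters.items()):
            continue  # apply user/agent/metadata filters before scoring
        results.append({**m, "score": round(cosine(q, embed(m["memory"])), 3)})
    return sorted(results, key=lambda r: r["score"], reverse=True)

memories = [
    {"memory": "prefers vegetarian food", "user_id": "alice", "created_at": "2025-02-19"},
    {"memory": "works as a data engineer", "user_id": "alice", "created_at": "2025-02-18"},
    {"memory": "prefers vegetarian food", "user_id": "bob", "created_at": "2025-02-17"},
]
hits = search(memories, "vegetarian food preferences", filters={"user_id": "alice"})
```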
48 changes: 48 additions & 0 deletions docs/core-concepts/memory-types.mdx
@@ -0,0 +1,48 @@
---
title: Memory Types
description: Understanding different types of memory in AI Applications
icon: "memory"
iconType: "solid"
---
To build useful AI applications, we need to understand how different memory systems work together. This guide explores the fundamental types of memory in AI systems and shows how Mem0 implements these concepts.

## Why Memory Matters

AI systems need memory for three key purposes:
1. Maintaining context during conversations
2. Learning from past interactions
3. Building personalized experiences over time

Without proper memory systems, AI applications would treat each interaction as completely new, losing valuable context and personalization opportunities.

## Short-Term Memory

The most basic form of memory in AI systems holds immediate context - like a person remembering what was just said in a conversation. This includes:

- **Conversation History**: Recent messages and their order
- **Working Memory**: Temporary variables and state
- **Attention Context**: Current focus of the conversation
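A minimal sketch of these three pieces, assuming nothing beyond the Python standard library: a bounded conversation buffer plus a small working-memory dict for temporary state. Once the buffer is full, the oldest turns fall out, mirroring a context window:

```python
from collections import deque

class ShortTermMemory:
    def __init__(self, max_turns=3):
        self.history = deque(maxlen=max_turns)  # recent messages, in order
        self.working = {}                       # temporary variables and state

    def observe(self, role, message):
        self.history.append((role, message))    # oldest turn drops when full

    def context(self):
        return list(self.history)

stm = ShortTermMemory(max_turns=3)
stm.observe("user", "Hi, I'm Alice")
stm.observe("assistant", "Hello Alice!")
stm.observe("user", "I'm vegetarian")
stm.observe("assistant", "Noted")              # pushes the first turn out
stm.working["topic"] = "dietary preferences"   # attention context / working state
```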

## Long-Term Memory

More sophisticated AI applications implement long-term memory to retain information across conversations. This includes:

- **Factual Memory**: Stored knowledge about users, preferences, and domain-specific information
- **Episodic Memory**: Past interactions and experiences
- **Semantic Memory**: Understanding of concepts and their relationships
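One way to model the three long-term memory types is as records with a shared shape, where a `kind` field distinguishes factual, episodic, and semantic memories so they can live in one store but be queried separately. The record layout below is an illustrative assumption, not Mem0's schema:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryRecord:
    kind: str          # "factual" | "episodic" | "semantic"
    content: str
    user_id: str
    metadata: dict = field(default_factory=dict)

store = [
    MemoryRecord("factual", "Alice is vegetarian", "alice"),
    MemoryRecord("episodic", "Asked for Tokyo restaurant tips on 2025-02-19", "alice"),
    MemoryRecord("semantic", "Vegetarian restaurants serve no meat", "alice"),
]
facts = [r for r in store if r.kind == "factual"]
```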

## Memory Characteristics

Each memory type has distinct characteristics:

| Type | Persistence | Access Speed | Use Case |
|------|-------------|--------------|-----------|
| Short-Term | Temporary | Instant | Active conversations |
| Long-Term | Persistent | Fast | User preferences and history |

## How Mem0 Implements Long-Term Memory

Mem0's long-term memory system builds on these foundations by:

1. Using vector embeddings to store and retrieve semantic information
2. Maintaining user-specific context across sessions
3. Implementing efficient retrieval mechanisms for relevant past interactions
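The three points above can be sketched in miniature: memories are embedded once at write time, kept in a per-user store across sessions, and retrieved by similarity at read time. The character-trigram "embedding" is a toy stand-in for a real embedding model, and all names are illustrative:

```python
import math

def embed(text):
    """Toy embedding: character-trigram counts."""
    t = text.lower()
    vec = {}
    for i in range(len(t) - 2):
        g = t[i:i + 3]
        vec[g] = vec.get(g, 0) + 1
    return vec

def similarity(a, b):
    dot = sum(a[g] * b.get(g, 0) for g in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

user_store = {}  # user_id -> list of (text, embedding), kept across sessions

def remember(user_id, text):
    user_store.setdefault(user_id, []).append((text, embed(text)))

def recall(user_id, query, k=1):
    q = embed(query)
    ranked = sorted(user_store.get(user_id, []),
                    key=lambda pair: similarity(q, pair[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Session 1: store facts
remember("alice", "Alice enjoys hiking in the mountains")
remember("alice", "Alice is allergic to peanuts")
# Session 2: retrieve the relevant past interaction
top = recall("alice", "any food allergies?")
```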
8 changes: 8 additions & 0 deletions docs/docs.json
@@ -27,6 +27,14 @@
"faqs"
]
},
{
"group": "Core Concepts",
"icon": "brain",
"pages": [
"core-concepts/memory-types",
"core-concepts/memory-operations"
]
},
{
"group": "Platform",
"icon": "cogs",
59 changes: 15 additions & 44 deletions docs/overview.mdx
@@ -8,55 +8,27 @@ iconType: "solid"
🔔 New Feature: [Webhooks](/features/webhook) are now available! Configure real-time notifications for memory events in your Mem0 project.
</Note>

[Mem0](https://mem0.dev/wd) (pronounced "mem-zero") enhances AI assistants and agents with an intelligent memory layer, enabling personalized AI interactions. Mem0 remembers user preferences and traits and continuously updates over time, making it ideal for applications like customer support chatbots and AI assistants.
# Introduction

## Understanding Mem0
[Mem0](https://mem0.dev/wd) (pronounced "mem-zero") enhances AI assistants by giving them persistent, contextual memory. AI systems using Mem0 actively learn from and adapt to user interactions over time.

Mem0, described as "_The Memory Layer for your AI Agents_," leverages advanced LLMs and algorithms to detect, store, and retrieve memories from conversations and interactions. It identifies key information such as facts, user preferences, and other contextual information, smartly updates memories over time by resolving contradictions, and supports the development of an AI Agent that evolves with the user interactions. When needed, Mem0 employs a smart search system to find memories, ranking them based on relevance, importance, and recency to ensure only the most useful information is presented.
Mem0's memory layer combines LLMs with vector-based storage. LLMs extract and process key information from conversations, while vector storage enables efficient semantic search and retrieval of memories. This architecture helps AI agents connect past interactions with current context for more relevant responses.

Mem0 provides multiple endpoints through which users can interact with their memories. The two main endpoints are `add` and `search`. The `add` endpoint lets users ingest their conversations into Mem0, storing them as memories. The `search` endpoint handles retrieval, allowing users to query their set of stored memories.
## Key Features

### ADD Memories
- **Memory Processing**: Uses LLMs to automatically extract and store important information from conversations while maintaining full context
- **Memory Management**: Continuously updates and resolves contradictions in stored information to maintain accuracy
- **Dual Storage Architecture**: Combines vector database for memory storage and graph database for relationship tracking
- **Smart Retrieval System**: Employs semantic search and graph queries to find relevant memories based on importance and recency
- **Simple API Integration**: Provides easy-to-use endpoints for adding (`add`) and retrieving (`search`) memories

<Frame caption="Architecture diagram illustrating the process of adding memories.">
<img src="images/add_architecture.png" />
</Frame>
## Use Cases

When a user has a conversation, Mem0 uses an LLM to understand and extract important information. This model is designed to capture detailed information while maintaining the full context of the conversation.
Here's how the process works:

1. First, the LLM extracts two key elements:
* Relevant memories
* Important entities and their relationships
2. The system then compares this new information with existing data to identify contradictions, if present.
3. A second LLM evaluates the new information and decides whether to:
* Add it as new data
* Update existing information
* Delete outdated information
4. These changes are automatically made to two databases:
* A vector database (for storing memories)
* A graph database (for storing relationships)

This entire process happens continuously with each user interaction, ensuring that the system always maintains an up-to-date understanding of the user's information.

### SEARCH Memories

<Frame caption="Architecture diagram illustrating the memory search process.">
<img src="images/search_architecture.png" />
</Frame>

When a user asks Mem0 a question, the system uses smart memory lookup to find relevant information. Here's how it works:

1. The user submits a question to Mem0
2. The LLM processes this question in two ways:
* It rewrites the question to search the vector database better
* It identifies important entities and their relationships from the question
3. The system then performs two parallel searches:
* It searches the vector database using the rewritten question and semantic search
* It searches the graph database using the identified entities and relationships using graph queries
4. Finally, Mem0 combines the results from both databases to provide a complete answer to the user's question

This approach ensures that Mem0 can find and return all relevant information, whether it's stored as memories in the vector database or as relationships in the graph database.
- **Customer Support Chatbots**: Create support agents that remember customer history, preferences, and past interactions to provide personalized assistance
- **Personal AI Tutors**: Build educational assistants that track student progress, adapt to learning patterns, and provide contextual help
- **Healthcare Applications**: Develop healthcare assistants that maintain patient history and provide personalized care recommendations
- **Enterprise Knowledge Management**: Power systems that learn from organizational interactions and maintain institutional knowledge
- **Personalized AI Assistants**: Create assistants that learn user preferences and adapt their responses over time

## Getting Started
Mem0 offers two powerful ways to leverage our technology: our [managed platform](/platform/overview) and our [open source solution](/open-source/quickstart).
Expand All @@ -74,7 +46,6 @@ Mem0 offers two powerful ways to leverage our technology: our [managed platform]
</Card>
</CardGroup>


## Need help?
If you have any questions, please feel free to reach out to us using one of the following methods:

