From b088ae6ecb9c3eec94ad9bafceeb413b26946827 Mon Sep 17 00:00:00 2001
From: Burhanuddin Mustafa Lakdawala
Date: Thu, 11 Apr 2024 09:07:18 -0700
Subject: [PATCH] fix markdown for long context user guide

https://microsoft.github.io/autogen/docs/topics/long_contexts/
---
 website/docs/topics/long_contexts.md | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/website/docs/topics/long_contexts.md b/website/docs/topics/long_contexts.md
index bba36b570c7c..0d8676191044 100644
--- a/website/docs/topics/long_contexts.md
+++ b/website/docs/topics/long_contexts.md
@@ -12,7 +12,6 @@ Why do we need to handle long contexts? The problem arises from several constrai
 
 The `TransformMessages` capability is designed to modify incoming messages before they are processed by the LLM agent. This can include limiting the number of messages, truncating messages to meet token limits, and more.
 
-````{=mdx}
 :::info Requirements
 Install `pyautogen`:
 ```bash