Commit
* add promptflow example
* add promptflow example
* add newline and sort imports
* add newline and sort imports
* sort imports
* fix format errors
* update readme
* add ecosystem docs
* update broken link
* update broken link
* removing link to samples folder
* update readme
Showing 12 changed files with 492 additions and 0 deletions.
.gitignore
@@ -0,0 +1,7 @@
.env
__pycache__/
.promptflow/*
!.promptflow/flow.tools.json
.runs/
.cache/
.vscode/
samples/apps/promptflow-autogen/.promptflow/flow.tools.json
67 changes: 67 additions & 0 deletions
@@ -0,0 +1,67 @@
{
  "package": {},
  "code": {
    "chat.jinja2": {
      "type": "llm",
      "inputs": {
        "chat_history": {
          "type": [
            "string"
          ]
        },
        "question": {
          "type": [
            "string"
          ]
        }
      },
      "source": "chat.jinja2"
    },
    "autogen_task.py": {
      "type": "python",
      "inputs": {
        "redisConnection": {
          "type": [
            "CustomConnection"
          ]
        },
        "question": {
          "type": [
            "string"
          ]
        },
        "azureOpenAiConnection": {
          "type": [
            "AzureOpenAIConnection"
          ]
        },
        "azureOpenAiModelName": {
          "type": [
            "string"
          ],
          "default": "gpt-4-32k"
        },
        "autogen_workflow_id": {
          "type": [
            "int"
          ],
          "default": "1"
        }
      },
      "source": "autogen_task.py",
      "function": "my_python_tool"
    },
    "autogen_workflow.py": {
      "type": "python",
      "inputs": {
        "input1": {
          "type": [
            "string"
          ]
        }
      },
      "source": "autogen_workflow.py",
      "function": "my_python_tool"
    }
  }
}
samples/apps/promptflow-autogen/README.md
@@ -0,0 +1,85 @@
# What is Promptflow

Promptflow is a comprehensive suite of tools that simplifies the development, testing, evaluation, and deployment of LLM-based AI applications. It also supports integration with Azure AI for cloud-based operations and is designed to streamline end-to-end development.

Refer to the [Promptflow docs](https://microsoft.github.io/promptflow/) for more information.

Quick links:

- Why use Promptflow - [Link](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/overview-what-is-prompt-flow)
- Quick start guide - [Link](https://microsoft.github.io/promptflow/how-to-guides/quick-start.html)

## Getting Started

- Install the required Python packages:

```bash
cd samples/apps/promptflow-autogen
pip install -r requirements.txt
```

- This example assumes a working Redis cache service is available. You can get started locally using this [guide](https://redis.io/docs/latest/operate/oss_and_stack/install/install-redis/) or use your favorite managed service; a Docker-based option is sketched below.
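
If Docker is available, one quick way to stand up a local Redis instance for this sample is the official image (the container name and URL below are illustrative, not part of the sample):

```bash
# Run a disposable local Redis on the default port 6379
docker run --rm -d --name promptflow-redis -p 6379:6379 redis:latest

# The matching connection URL for this instance would be:
# redis://localhost:6379/0
```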

## Chat flow

Chat flow is designed for conversational application development, building upon the capabilities of standard flow and providing enhanced support for chat inputs/outputs and chat history management. With chat flow, you can easily create a chatbot that handles chat input and output.

## Create connection for LLM tool to use

You can follow these steps to create the connections required by an LLM tool.

Currently, the LLM tool supports two connection types: "AzureOpenAI" and "OpenAI". If you want to use the "AzureOpenAI" connection type, you need to create an Azure OpenAI service first. Please refer to [Azure OpenAI Service](https://azure.microsoft.com/en-us/products/cognitive-services/openai-service/) for more details. If you want to use the "OpenAI" connection type, you need to create an OpenAI account first. Please refer to [OpenAI](https://platform.openai.com/) for more details.

```bash
# Override keys with --set to avoid yaml file changes

# Create the Azure OpenAI connection
pf connection create --file azure_openai.yaml --set api_key=<your_api_key> api_base=<your_api_base> --name open_ai_connection

# Create the custom connection for the Redis cache
pf connection create -f custom_conn.yaml --set secrets.redis_url=<your-redis-connection-url> --name redis_connection_url
# Sample Redis connection string: rediss://:PASSWORD@redis_host_name.redis.cache.windows.net:6380/0
```
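
For reference, the custom connection file passed above could look like the following minimal sketch (field names follow the Promptflow custom connection schema; the actual `custom_conn.yaml` shipped with the sample may differ):

```yaml
# custom_conn.yaml - minimal sketch of a Promptflow custom connection.
# The placeholder secret is overridden at creation time via --set secrets.redis_url=...
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/CustomConnection.schema.json
name: redis_connection_url
type: custom
secrets:
  redis_url: "<your-redis-connection-url>"
```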

Note: in [flow.dag.yaml](flow.dag.yaml) we use a connection named `aoai_connection` for Azure OpenAI and `redis_connection_url` for Redis.

```bash
# Show a registered connection
pf connection show --name open_ai_connection
```

Please refer to the connections [document](https://promptflow.azurewebsites.net/community/local/manage-connections.html) and [example](https://github.com/microsoft/promptflow/tree/main/examples/connections) for more details.

## Develop a chat flow

The most important elements that differentiate a chat flow from a standard flow are **Chat Input**, **Chat History**, and **Chat Output**.

- **Chat Input**: Chat input refers to the messages or queries submitted by users to the chatbot. Effectively handling chat input is crucial for a successful conversation, as it involves understanding user intentions, extracting relevant information, and triggering appropriate responses.

- **Chat History**: Chat history is the record of all interactions between the user and the chatbot, including both user inputs and AI-generated outputs. Maintaining chat history is essential for keeping track of the conversation context and ensuring the AI can generate contextually relevant responses. Chat History is a special type of chat flow input that stores chat messages in a structured format.

- NOTE: Currently the sample flows do not send chat history messages to the agent workflow.

- **Chat Output**: Chat output refers to the AI-generated messages that are sent to the user in response to their inputs. Generating contextually appropriate and engaging chat outputs is vital for a positive user experience.

A chat flow can have multiple inputs, but Chat History and Chat Input are required inputs in a chat flow.
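
As an illustration of that structured format, Promptflow conventionally represents chat history as a list of per-turn input/output pairs; the values below are hypothetical:

```python
# Hypothetical chat_history value in Promptflow's conventional structure:
# one dict per turn, recording the flow inputs and the outputs produced.
chat_history = [
    {
        "inputs": {"question": "What is Promptflow?"},
        "outputs": {"answer": "A suite of tools for building LLM-based apps."},
    },
    {
        "inputs": {"question": "Does it work with AutoGen?"},
        "outputs": {"answer": "Yes - this sample wires AutoGen agents into a flow."},
    },
]
```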

## Interact with chat flow

Promptflow supports interacting with a chat flow from VS Code, and the Promptflow CLI provides a way to start an interactive chat session. Use the command below to start one:

```bash
pf flow test --flow <flow_folder> --interactive
```

## Autogen State Flow

[Autogen State Flow](./autogen_stateflow.py) contains the StateFlow example shared at [StateFlow](https://microsoft.github.io/autogen/blog/2024/02/29/StateFlow/), adapted to Promptflow. All interim messages are sent to a Redis channel; you can use these to stream to a frontend or to take further actions. The output of the Promptflow run is the `summary` message from the group chat.
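
As a usage sketch, the class could be driven like this (the config values and import path are assumptions based on the file shown later in this commit):

```python
# Hypothetical driver for the AgStateFlow class defined in autogen_stateflow.py.
from autogen_stateflow import AgStateFlow

config_list = [{"model": "gpt-4-32k", "api_key": "<your_api_key>"}]  # assumed config
flow = AgStateFlow(redis_url="redis://localhost:6379/0", config_list=config_list)

# Runs the group chat end to end and returns an autogen.ChatResult.
result = flow.chat("Find recent arXiv papers about LLM agents")
print(result.summary)
```

The `AgNestedChat` class in the nested chat example exposes the same `chat(question)` interface.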

## Agent Nested Chat

[Autogen Nested Chat](./agentchat_nestedchat.py) contains Scenario 1 of the nested chat example shared at [Nested Chats](https://microsoft.github.io/autogen/docs/notebooks/agentchat_nestedchat), adapted to Promptflow. All interim messages are sent to a Redis channel; you can use these to stream to a frontend or to take further actions. The output of the Promptflow run is the `summary` message from the group chat.

## Redis for Data cache and Interim Messages

Autogen supports Redis for [data caching](https://microsoft.github.io/autogen/docs/reference/cache/redis_cache/), and since Redis also supports a pub/sub model, this Promptflow example is configured so that all agent callbacks send messages to a Redis channel. This is an optional feature, but it is essential for long-running workflows and gives your frontend access to interim messages; a minimal subscriber sketch follows below.

- NOTE: Currently Promptflow only supports [SSE](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events) for streaming data and does not support websockets.
- NOTE: In a multi-user chatbot environment, please make the necessary changes to send messages to the corresponding channel.
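
As a minimal sketch of such a subscriber (assuming a local Redis and the `channel:1` channel used by the sample code; the URL is illustrative):

```python
import json

import redis

# Connect with the same URL used for the redis_connection_url connection.
redis_con = redis.from_url("redis://localhost:6379/0")

# Subscribe to the channel the sample agents publish interim messages to.
pubsub = redis_con.pubsub()
pubsub.subscribe("channel:1")

for message in pubsub.listen():
    if message["type"] != "message":
        continue  # skip subscribe/unsubscribe notifications
    payload = json.loads(message["data"])
    print(f"{payload['sender']} -> {payload['receiver']}: {payload['messages'][-1]}")
```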
samples/apps/promptflow-autogen/agentchat_nestedchat.py
108 changes: 108 additions & 0 deletions
@@ -0,0 +1,108 @@
import json
from typing import Any, Dict, List

import redis

import autogen
from autogen import Cache


class AgNestedChat:
    def __init__(self, redis_url: str, config_list: List[Dict[str, Any]]) -> None:
        # Initialize the workflows dictionary
        self.workflows = {}

        # Establish a connection to Redis
        self.redis_con = redis.from_url(redis_url)

        # Create a Redis cache with a seed of 16
        self.redis_cache = Cache.redis(cache_seed=16, redis_url=redis_url)

        # Store the configuration list
        self.config_list = config_list

        # Define the LLM configuration
        self.llm_config = {
            "cache_seed": False,  # change the cache_seed for different trials
            "temperature": 0,
            "config_list": self.config_list,
            "timeout": 120,
        }

        # Initialize the writer agent
        self.writer = autogen.AssistantAgent(
            name="Writer",
            llm_config={"config_list": config_list},
            system_message="""
            You are a professional writer, known for your insightful and engaging articles.
            You transform complex concepts into compelling narratives.
            You should improve the quality of the content based on the feedback from the user.
            """,
        )

        # Initialize the user proxy agent
        self.user_proxy = autogen.UserProxyAgent(
            name="User",
            human_input_mode="NEVER",
            is_termination_msg=lambda x: x.get("content", "").find("TERMINATE") >= 0,
            code_execution_config={
                "last_n_messages": 1,
                "work_dir": "tasks",
                "use_docker": False,
            },  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
        )

        # Initialize the critic agent
        self.critic = autogen.AssistantAgent(
            name="Critic",
            llm_config={"config_list": config_list},
            system_message="""
            You are a critic, known for your thoroughness and commitment to standards.
            Your task is to scrutinize content for any harmful elements or regulatory violations, ensuring
            all materials align with required guidelines.
            """,
        )

        # Register the reply function for each agent
        agents_list = [self.writer, self.user_proxy, self.critic]
        for agent in agents_list:
            agent.register_reply(
                [autogen.Agent, None],
                reply_func=self._update_redis,
                config={"callback": None},
            )

    def _update_redis(self, recipient, messages=None, sender=None, config=None):
        # Publish a message to Redis (messages=None avoids a mutable default argument)
        mesg = {"sender": sender.name, "receiver": recipient.name, "messages": messages}
        self.redis_con.publish("channel:1", json.dumps(mesg))
        return False, None

    def _reflection_message(self, recipient, messages, sender, config):
        # Generate a reflection message
        print("Reflecting...")
        return f"Reflect and provide critique on the following writing. \n\n {recipient.chat_messages_for_summary(sender)[-1]['content']}"

    def chat(self, question: str) -> autogen.ChatResult:
        # Register nested chats for the user proxy agent
        self.user_proxy.register_nested_chats(
            [
                {
                    "recipient": self.critic,
                    "message": self._reflection_message,
                    "summary_method": "last_msg",
                    "max_turns": 1,
                }
            ],
            trigger=self.writer,  # condition=my_condition,
        )

        # Initiate a chat and return the result
        res = self.user_proxy.initiate_chat(
            recipient=self.writer,
            message=question,
            max_turns=2,
            summary_method="last_msg",
        )
        return res
samples/apps/promptflow-autogen/autogen_stateflow.py
@@ -0,0 +1,120 @@
import json
import tempfile
from typing import Any, Dict, List

import redis

import autogen
from autogen import Cache
from autogen.coding import LocalCommandLineCodeExecutor


class AgStateFlow:
    def __init__(self, redis_url: str, config_list: List[Dict[str, Any]]) -> None:
        # Initialize the workflows dictionary
        self.workflows = {}

        # Establish a connection to Redis
        self.redis_con = redis.from_url(redis_url)

        # Create a Redis cache with a seed of 16
        self.redis_cache = Cache.redis(cache_seed=16, redis_url=redis_url)

        # Store the configuration list
        self.config_list = config_list

        # Create a temporary directory to store the code files
        self.temp_dir = tempfile.TemporaryDirectory()

        # Create a local command line code executor with a timeout of 10 seconds
        # and use the temporary directory to store the code files
        self.local_executor = LocalCommandLineCodeExecutor(timeout=10, work_dir=self.temp_dir.name)

        # Define the GPT-4 configuration
        self.gpt4_config = {
            "cache_seed": False,
            "temperature": 0,
            "config_list": self.config_list,
            "timeout": 120,
        }
        # Initialize the agents
        self.initializer = autogen.UserProxyAgent(
            name="Init",
            code_execution_config=False,
        )
        self.coder = autogen.AssistantAgent(
            name="Retrieve_Action_1",
            llm_config=self.gpt4_config,
            system_message="""You are the Coder. Given a topic, write code to retrieve related papers from the arXiv API, print their title, authors, abstract, and link.
You write python/shell code to solve tasks. Wrap the code in a code block that specifies the script type. The user can't modify your code. So do not suggest incomplete code which requires others to modify. Don't use a code block if it's not intended to be executed by the executor.
Don't include multiple code blocks in one response. Do not ask others to copy and paste the result. Check the execution result returned by the executor.
If the result indicates there is an error, fix the error and output the code again. Suggest the full code instead of partial code or code changes. If the error can't be fixed or if the task is not solved even after the code is executed successfully, analyze the problem, revisit your assumption, collect additional info you need, and think of a different approach to try.
""",
        )
        self.executor = autogen.UserProxyAgent(
            name="Retrieve_Action_2",
            system_message="Executor. Execute the code written by the Coder and report the result.",
            human_input_mode="NEVER",
            code_execution_config={"executor": self.local_executor},
        )
        self.scientist = autogen.AssistantAgent(
            name="Research_Action_1",
            llm_config=self.gpt4_config,
            system_message="""You are the Scientist. Please categorize papers after seeing their abstracts printed and create a markdown table with Domain, Title, Authors, Summary and Link""",
        )

        # Create the workflow
        self.create_workflow()

    def _state_transition(self, last_speaker, groupchat):
        messages = groupchat.messages

        # Define the state transitions
        if last_speaker is self.initializer:
            # init -> retrieve
            return self.coder
        elif last_speaker is self.coder:
            # retrieve: action 1 -> action 2
            return self.executor
        elif last_speaker is self.executor:
            if messages[-1]["content"] == "exitcode: 1":
                # retrieve --(execution failed)--> retrieve
                return self.coder
            else:
                # retrieve --(execution success)--> research
                return self.scientist
        elif last_speaker is self.scientist:
            # research -> end
            return None

    def _update_redis(self, recipient, messages=None, sender=None, config=None):
        # Publish a message to Redis (messages=None avoids a mutable default argument)
        mesg = {"sender": sender.name, "receiver": recipient.name, "messages": messages}
        self.redis_con.publish("channel:1", json.dumps(mesg))
        return False, None

    def create_workflow(self):
        # Register the reply function for each agent
        agents_list = [self.initializer, self.coder, self.executor, self.scientist]
        for agent in agents_list:
            agent.register_reply(
                [autogen.Agent, None],
                reply_func=self._update_redis,
                config={"callback": None},
            )

        # Create a group chat with the agents and define the speaker selection method
        self.groupchat = autogen.GroupChat(
            agents=agents_list,
            messages=[],
            max_round=20,
            speaker_selection_method=self._state_transition,
        )

        # Create a group chat manager
        self.manager = autogen.GroupChatManager(groupchat=self.groupchat, llm_config=self.gpt4_config)

    def chat(self, question: str):
        # Initiate a chat and return the result
        chat_result = self.initializer.initiate_chat(self.manager, message=question, cache=self.redis_cache)
        return chat_result