Added a demonstration notebook featuring the usage of LangChain with AutoGen #3461
Conversation
@microsoft-github-policy-service agree |
We would love to see the generated chat too. Can you run this and push the changes, so it doesn't trouble developers using the notebook directly?
@Kirushikesh thanks for adding the notebook.
Hey @Kirushikesh, thanks for creating this. Can I just clarify: it seems this is more about using LangChain's Hugging Face library than about an integration with LangChain itself? Do you think a Hugging Face client class would achieve a similar outcome to this approach? It has been mentioned before on AutoGen's Discord and it may be worth creating one. |
Hello @marklysze, for the first question: no, it's not about langchain-huggingface; it's about how to use LangChain with AutoGen. LangChain currently supports almost all of the LLMs out there from the various LLM providers. We can load any LLM through LangChain, which gives us an abstraction (BaseLanguageModel), and then use that LLM class with AutoGen's agentic capabilities. For the demonstration I selected Hugging Face in the notebook, but we can literally use it with any LLM supported by LangChain. For example, the code below shows how to use an OpenAI LLM loaded through LangChain in AutoGen (yes, I know we can use the OpenAI model directly; this is just to demonstrate LangChain):

```python
import os
import json

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

load_dotenv()


class CustomModelClientWithArguments(CustomModelClient):
    def __init__(self, config, **kwargs):
        print(f"CustomModelClientWithArguments config: {config}")
        self.model_name = config["model"]
        gen_config_params = config.get("params", {})
        self.model = ChatOpenAI(model=self.model_name, **gen_config_params)
        print(f"Loaded model {config['model']}")


os.environ["OAI_CONFIG_LIST"] = json.dumps([{
    "model": "gpt-4o-mini",
    "model_client_cls": "CustomModelClientWithArguments",
    "n": 1,
    "params": {
        "max_tokens": 100,
        "top_p": 1,
        "temperature": 0.1,
        "max_retries": 2,
    }
}])

config_list_custom = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={"model_client_cls": ["CustomModelClientWithArguments"]},
)

assistant = AssistantAgent("assistant", llm_config={"config_list": config_list_custom})
assistant.register_model_client(
    model_client_cls=CustomModelClientWithArguments,
)

user_proxy.initiate_chat(assistant, message="Write python code to print Hello World!")
```

For the second question, I am not sure which Hugging Face client class you are referring to. Let me know if you have any further queries :) |
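The snippet above subclasses a CustomModelClient defined earlier in the notebook, which is not shown in this thread. Below is a minimal sketch of what such a class could look like, assuming it follows AutoGen 0.2's custom model client protocol (create, message_retrieval, cost, get_usage) and wraps a LangChain chat model; the names and details are illustrative, not the notebook's exact code.

```python
from types import SimpleNamespace

from langchain_core.messages import AIMessage, HumanMessage, SystemMessage


class CustomModelClient:
    """Illustrative base client bridging a LangChain chat model to AutoGen 0.2."""

    def __init__(self, config, **kwargs):
        self.model_name = config["model"]
        self.model = None  # subclasses assign a concrete LangChain chat model here

    def create(self, params):
        # Convert AutoGen's OpenAI-style message dicts into LangChain messages.
        role_map = {"system": SystemMessage, "assistant": AIMessage}
        lc_messages = [
            role_map.get(m["role"], HumanMessage)(content=m["content"])
            for m in params["messages"]
        ]
        reply = self.model.invoke(lc_messages)

        # Wrap the reply in an OpenAI-like response object that AutoGen can consume.
        message = SimpleNamespace(content=reply.content, function_call=None, tool_calls=None)
        return SimpleNamespace(choices=[SimpleNamespace(message=message)], model=self.model_name)

    def message_retrieval(self, response):
        return [choice.message.content for choice in response.choices]

    def cost(self, response):
        return 0  # token cost tracking is skipped in this sketch

    @staticmethod
    def get_usage(response):
        return {}
```

Because AutoGen only calls these four methods, any LangChain chat model (OpenAI, Gemini, a local Hugging Face pipeline, and so on) can sit behind `create`.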
Thanks @Kirushikesh, would you be able to test some non-OpenAI models, such as Anthropic's models and Meta's Llamas? The role and name fields aren't always compatible with AutoGen messages. |
Hello @marklysze, unfortunately I don't have Anthropic API keys; maybe you can help me with that. For Meta Llama, do you mean using the model through Hugging Face? If yes, the notebook already demonstrates how to load any model from Hugging Face; it's just a matter of changing the model name. Just to demonstrate the use of a non-OpenAI model with LangChain, I have used a Google Gemini model loaded through LangChain here, which also works.
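The Gemini code from this comment did not survive in the thread; the following is a minimal sketch of what that setup could look like, assuming the langchain-google-genai integration and a gemini-1.5-flash model (both assumptions, not necessarily what was actually used), reusing the custom client pattern from the earlier snippet.

```python
import json
import os

from langchain_google_genai import ChatGoogleGenerativeAI  # assumed integration package

import autogen
from autogen import AssistantAgent


class GeminiModelClient(CustomModelClient):  # CustomModelClient as defined in the notebook
    def __init__(self, config, **kwargs):
        self.model_name = config["model"]
        gen_config_params = config.get("params", {})
        # Requires GOOGLE_API_KEY to be set in the environment.
        self.model = ChatGoogleGenerativeAI(model=self.model_name, **gen_config_params)


os.environ["OAI_CONFIG_LIST"] = json.dumps([
    {
        "model": "gemini-1.5-flash",  # assumed model name
        "model_client_cls": "GeminiModelClient",
        "params": {"temperature": 0.1, "max_output_tokens": 100},
    }
])

config_list_custom = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={"model_client_cls": ["GeminiModelClient"]},
)

assistant = AssistantAgent("assistant", llm_config={"config_list": config_list_custom})
assistant.register_model_client(model_client_cls=GeminiModelClient)
```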
Output:
Let me know if you have any additional questions. |
Co-authored-by: gagb <[email protected]>
For some reason I am not able to request a review from @marklysze -- double checking what happened there. |
Thanks @gagb, it may be that my permissions have changed? It looks like I can still add a review. I'll try to do a review shortly. |
That's weird -- double checking asap! We are having a slight issue accessing repo management; will add you asap. |
Thanks @Kirushikesh for the Gemini code sample, that worked for me. Unfortunately the notebook code isn't working for me; it's not getting past this block of code in the notebook.
It is pulling down the tensors but just stops at that point, without an exception, and I'm not sure why. I'm running in AutoGen's dev container (Docker). I ran the Gemini code above, though, and it worked okay, so perhaps it's something to do with pulling a model down. Is there another model you have had success with (that pulls down to run locally)? |
@marklysze, I am not sure what the issue is; it's literally just loading the model from Hugging Face. I have tried with |
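For readers following along, the notebook loads the model through LangChain's Hugging Face integration; the sketch below shows one way this can look, assuming the langchain-huggingface package and an ungated example model (the specific models referenced in the comment above were not preserved).

```python
from langchain_huggingface import ChatHuggingFace, HuggingFacePipeline

# Downloads the model weights from the Hugging Face Hub on first use; gated models
# (e.g. Llama 3.1 or Mistral) additionally require being logged in to the Hub.
llm = HuggingFacePipeline.from_model_id(
    model_id="microsoft/Phi-3-mini-4k-instruct",  # assumed, ungated example model
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 256, "do_sample": True, "temperature": 0.1},
)
chat_model = ChatHuggingFace(llm=llm)

# chat_model is a LangChain chat model, so it can back a custom AutoGen model
# client in the same way ChatOpenAI did in the earlier snippet.
print(chat_model.invoke("Write python code to print Hello World!").content)
```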
@marklysze -- your role had auto-expired, so sorry about this! I believe it was restored; can you please check? @jackgerrits told me that you need to accept some invite? |
Sorry about the auto-expire! As soon as you accept I can double check the role too |
Thanks @gagb and @jackgerrits, got it and I've accepted, if you can check the role that would be appreciated :) |
Thanks @Kirushikesh, I'll give Mistral a go... It's downloading the models, yep (e.g. Llama 3.1 8B downloaded almost 20GB) |
All sorted now! |
Looks good, thanks! |
Sorry @Kirushikesh, I had limited success in getting it running in Docker or my Ubuntu environment. I tried Llama 3.1 8B and Mistral 7B v0.3. Would you be able to test using a couple of my test python scripts under https://github.com/marklysze/AutoGenClientTesting? |
@marklysze, sorry, I am not sure if there is some issue in my code; I can fix it if you point one out. So you want me to use these scripts (test_calc.py, test_chess.py) to test my current notebook, right? Also, these scripts require tool-calling capabilities, which I guess is not possible with the Llama or Mistral models on Hugging Face, if I am right. Can you please tell me what error you are facing when running in Docker / your Ubuntu environment? |
As I can't test it (it's literally just stopping, no exceptions), I can't check that the custom model client works in various scenarios. It seems that tool calling is available, though I don't have any experience using the Hugging Face API. So, it would be good for you/someone to test various AutoGen workflows and see how they go. |
I don't have experience with the Hugging Face API either. @Kirushikesh, let us know your thoughts on exploring the tool call. |
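For context, tool calling on the AutoGen side is wired up roughly as in the sketch below (a minimal example using AutoGen 0.2's register_for_llm / register_for_execution decorators; the add_numbers tool and the message are hypothetical). Whether the Hugging Face model behind the custom client can actually emit the resulting tool calls is the open question being discussed here.

```python
from typing import Annotated

# Assumes `assistant` and `user_proxy` are the agents created earlier with the
# LangChain-backed custom model client.

@user_proxy.register_for_execution()
@assistant.register_for_llm(description="Add two integers and return the sum.")
def add_numbers(
    a: Annotated[int, "First number"],
    b: Annotated[int, "Second number"],
) -> int:
    return a + b

# The assistant's LLM now sees the tool schema; for this to work end to end, the
# custom model client must return tool_calls in its responses.
user_proxy.initiate_chat(assistant, message="What is 40 + 2? Use the tool.")
```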
@Kirushikesh also run the command |
Hello @marklysze, sorry for the delay in addressing the issue. First, I have updated the notebook with the following changes:
(I remember you mentioned that you had tried Llama 3.1 and Mistral and the notebook terminated; both are gated models, so not being logged in to Hugging Face may have caused the issue. See the login sketch below.)

Testing the notebook on the AutoGenClientTesting programs:
Can you please let me know if I need to perform any other analysis?
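On the gated-model point above, a minimal sketch of authenticating with the Hugging Face Hub before loading Llama 3.1 or Mistral (the environment variable name is an assumption; running `huggingface-cli login` once in the environment is the equivalent interactive route):

```python
import os

from huggingface_hub import login

# A user access token with permission for the gated repositories is required;
# it is read from an (assumed) HF_TOKEN environment variable here.
login(token=os.environ["HF_TOKEN"])
```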
The code to test CODE GENERATION AND EXECUTION (test_code_gen.py):

```python
from pathlib import Path

from autogen import AssistantAgent, UserProxyAgent
from autogen.coding import LocalCommandLineCodeExecutor

# Setting up the code executor
workdir = Path("coding")
workdir.mkdir(exist_ok=True)
code_executor = LocalCommandLineCodeExecutor(work_dir=workdir)

# Setting up the agents
# The UserProxyAgent will execute the code that the AssistantAgent provides
user_proxy_agent = UserProxyAgent(
    name="User",
    code_execution_config={"executor": code_executor},
    is_termination_msg=lambda msg: "FINISH" in msg.get("content"),
)

system_message = """You are a helpful AI assistant who writes code and the user executes it.
Solve tasks using your coding and language skills.
In the following cases, suggest python code (in a python coding block) for the user to execute.
Solve the task step by step if you need to. If a plan is not provided, explain your plan first. Be clear which step uses code, and which step uses your language skill.
When using code, you must indicate the script type in the code block. The user cannot provide any other feedback or perform any other action beyond executing the code you suggest. The user can't modify your code. So do not suggest incomplete code which requires users to modify. Don't use a code block if it's not intended to be executed by the user.
Don't include multiple code blocks in one response. Do not ask users to copy and paste the result. Instead, use 'print' function for the output when relevant. Check the execution result returned by the user.
If the result indicates there is an error, fix the error and output the code again. Suggest the full code instead of partial code or code changes. If the error can't be fixed or if the task is not solved even after the code is executed successfully, analyze the problem, revisit your assumption, collect additional info you need, and think of a different approach to try.
When you find an answer, verify the answer carefully. Include verifiable evidence in your response if possible.
IMPORTANT: Wait for the user to execute your code and then you can reply with the word "FINISH". DO NOT OUTPUT "FINISH" after your code block."""

# The AssistantAgent, using Huggingface model, will take the coding request and return code
assistant_agent = AssistantAgent(
    name="Together Assistant",
    system_message=system_message,
    llm_config={"config_list": config_list_custom},
)
assistant_agent.register_model_client(model_client_cls=CustomModelClient)

# Start the chat, with the UserProxyAgent asking the AssistantAgent the message
chat_result = user_proxy_agent.initiate_chat(
    assistant_agent,
    message="Provide code to count the number of prime numbers from 1 to 10000.",
)
```

The output:
|
The code to test LLM Reflection (test_reflection.py):

```python
writer = AssistantAgent(
    name="Writer",
    llm_config={"config_list": config_list_custom},
    system_message="""
    You are a professional writer, known for your insightful and engaging articles.
    You transform complex concepts into compelling narratives.
    You should improve the quality of the content based on the feedback from the user.
    """,
)

user_proxy = UserProxyAgent(
    name="User",
    human_input_mode="NEVER",
    is_termination_msg=lambda x: x.get("content", "").find("TERMINATE") >= 0,
    code_execution_config={
        "last_n_messages": 1,
        "work_dir": "tasks",
        "use_docker": False,
    },  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
)

critic = AssistantAgent(
    name="Critic",
    llm_config={"config_list": config_list_custom},
    system_message="""
    You are a critic, known for your thoroughness and commitment to standards.
    Your task is to scrutinize content for any harmful elements or regulatory violations, ensuring
    all materials align with required guidelines.
    For code
    """,
)


def reflection_message(recipient, messages, sender, config):
    print("Reflecting...", "yellow")
    return f"Reflect and provide critique on the following writing. \n\n {recipient.chat_messages_for_summary(sender)[-1]['content']}"


writer.register_model_client(model_client_cls=CustomModelClient)
critic.register_model_client(model_client_cls=CustomModelClient)

task = """Write a concise but engaging blogpost about Navida."""

user_proxy.register_nested_chats(
    [{"recipient": critic, "message": reflection_message, "summary_method": "last_msg", "max_turns": 1}],
    trigger=writer,
)

res = user_proxy.initiate_chat(recipient=writer, message=task, max_turns=2, summary_method="last_msg")
```

The output:
|
Thanks for the PR.
@ekzhu please merge the PR, I don't have access. Thanks @Kirushikesh for the PR. |
This PR is against AutoGen 0.2. AutoGen 0.2 has been moved to the 0.2 branch. Please rebase your PR on the 0.2 branch or update it to work with the new AutoGen 0.4 that is now in main. |
@rysweet I have rebased the PR onto the AutoGen 0.2 branch. Please review. |
Why are these changes needed?
I was trying to use AutoGen's powerful agentic framework with other LLMs, but AutoGen explicitly supports only a few LLM providers by default; for the rest you need to write a custom configuration, which is difficult to design for the many LLMs/LLM providers out there. Since an existing library like LangChain already handles the tedious task of LLM compatibility, I think this notebook, which shows how to use the LangChain library with AutoGen, opens up every LLM supported by LangChain for use with AutoGen. I found the related notebook agentchat_langchain.ipynb, which shows LangChain with AutoGen, but I feel it is not very simple and clear for new users to understand.

Checks