
Ollama Client (with tool calling) #3056

Merged: 26 commits, Oct 1, 2024

Conversation

@marklysze (Collaborator) commented Jul 1, 2024

An Ollama client! Run your local models with AutoGen using a dedicated client class.

One of the key features of this client (still very much experimental) is support for tool calling. This is done "manually": tools are injected into the prompt as text, and the client translates between AutoGen's tool-call objects and the model's text messages (since updated to also support Ollama's native tool calling). Tool calling is described in more detail below, but you should be able to get up and running with it as it stands, without customising the injected text.

This manual tool-calling approach, and the actual text injected, is a first attempt at handling tool calling, so if you can help improve it, please do!
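For context, here is a minimal sketch of the general idea behind the manual approach. The function names and prompt wording are illustrative only, not the client's actual implementation:

import json
from typing import Optional

def inject_tools_into_prompt(system_message: str, tools: list) -> str:
    """Append a plain-text description of the available tools to the prompt."""
    tool_lines = "\n".join(
        f"- {t['name']}: {t['description']} (parameters: {json.dumps(t['parameters'])})"
        for t in tools
    )
    return (
        f"{system_message}\n\n"
        "You can call these tools by replying with JSON of the form "
        '{"name": "<tool name>", "arguments": {...}}:\n' + tool_lines
    )

def parse_tool_call(model_reply: str) -> Optional[dict]:
    """Try to interpret the model's text reply as a tool call."""
    try:
        call = json.loads(model_reply)
        if isinstance(call, dict) and "name" in call:
            return call
    except json.JSONDecodeError:
        pass  # the real client uses fix-busted-json to repair near-JSON replies
    return None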

I'll use the client with some notebooks and local models and summarise the results in another comment.

To run the code, you'll need to install the ollama and fix-busted-json packages (once this PR is merged, they will be installed automatically via pip install pyautogen[ollama]):
pip install ollama
pip install fix-busted-json

2024-07-27: Updated to include Ollama's native tool calling (just released in v0.3.0 of the Ollama library).
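For reference, a hedged sketch of what native tool calling looks like with the ollama Python library itself, independent of this client (get_current_weather here is a hypothetical tool; the response shape follows Ollama's published examples):

import ollama

response = ollama.chat(
    model="llama3.1",  # a model that supports native tool calling
    messages=[{"role": "user", "content": "What is the weather in Paris?"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",  # hypothetical tool for illustration
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
)

# When the model decides to call a tool, the reply carries structured tool
# calls instead of free text.
for call in response["message"].get("tool_calls") or []:
    print(call["function"]["name"], call["function"]["arguments"])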

Related issue numbers

#2893

@marklysze added the "models" label (pertains to using alternate, non-GPT models, e.g. local models, llama, etc.) on Jul 1, 2024
@marklysze self-assigned this on Jul 1, 2024
@marklysze (Collaborator, Author) commented Jul 1, 2024

Basic program:

# THIS TESTS: TWO AGENTS WITH TERMINATION

from autogen import ConversableAgent

altmodel_llm_config = {
    "config_list": [
        {
            "api_type": "ollama",
            "model": "llama3:8b-instruct-q6_K",
            "client_host": "http://192.168.0.1:11434",
            "seed": 42,
        }
    ]
}

jack = ConversableAgent(
    "Jack",
    llm_config=altmodel_llm_config,
    system_message="Your name is Jack and you are a comedian in a two-person comedy show.",
    is_termination_msg=lambda x: "FINISH" in (x.get("content") or ""),
)
emma = ConversableAgent(
    "Emma",
    llm_config=altmodel_llm_config,
    system_message="Your name is Emma and you are a comedian in a two-person comedy show. "
    "Say the word FINISH ONLY AFTER you've heard 2 of Jack's jokes.",
    is_termination_msg=lambda x: "FINISH" in (x.get("content") or ""),
)

chat_result = jack.initiate_chat(emma, message="Emma, tell me a joke about goldfish and peanut butter.", max_turns=10)

Add "stream": True to the config to use streaming.

@marklysze (Collaborator, Author)

Tool calling:

import autogen
from typing import Literal
from typing_extensions import Annotated

# THIS TESTS: TOOL CALLING

altmodel_llm_config = {
    "config_list":
    [
        {
            "api_type": "ollama",
            "model": "llama3:8b-instruct-q6_K",
            "client_host": "http://192.168.0.1:11434",
            "seed": 43,
            "cache_seed": None
        }
    ]
}

# Create the agent and include examples of the function calling JSON in the prompt
# to help guide the model
chatbot = autogen.AssistantAgent(
    name="chatbot",
    system_message="For currency exchange tasks, "
        "only use the functions you have been provided with.",
    llm_config=altmodel_llm_config,
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    is_termination_msg=lambda x: "TERMINATE" in (x.get("content") or ""),
    human_input_mode="NEVER",
    max_consecutive_auto_reply=1,
)

CurrencySymbol = Literal["USD", "EUR"]

# Define our function that we expect to call
def exchange_rate(base_currency: CurrencySymbol, quote_currency: CurrencySymbol) -> float:
    if base_currency == quote_currency:
        return 1.0
    elif base_currency == "USD" and quote_currency == "EUR":
        return 1 / 1.1
    elif base_currency == "EUR" and quote_currency == "USD":
        return 1.1
    else:
        raise ValueError(f"Unknown currencies {base_currency}, {quote_currency}")

# Register the function with the agent
@user_proxy.register_for_execution()
@chatbot.register_for_llm(description="Currency exchange calculator.")
def currency_calculator(
    base_amount: Annotated[float, "Amount of currency in base_currency"],
    base_currency: Annotated[CurrencySymbol, "Base currency"] = "USD",
    quote_currency: Annotated[CurrencySymbol, "Quote currency"] = "EUR",
) -> str:
    quote_amount = exchange_rate(base_currency, quote_currency) * base_amount
    return f"{format(quote_amount, '.2f')} {quote_currency}"

# start the conversation
res = user_proxy.initiate_chat(
    chatbot,
    message="How much is 123.45 EUR in USD?",
    summary_method="reflection_with_llm",
)

print(f"SUMMARY: {res.summary['content']}")

and result:

user_proxy (to chatbot):

How much is 123.45 EUR in USD?

--------------------------------------------------------------------------------
chatbot (to user_proxy):


***** Suggested tool call (ollama_func_3384): currency_calculator *****
Arguments: 
{"base_amount": 123.45, "base_currency": "EUR", "quote_currency": "USD"}
***********************************************************************

--------------------------------------------------------------------------------

>>>>>>>> EXECUTING FUNCTION currency_calculator...
user_proxy (to chatbot):

user_proxy (to chatbot):

***** Response from calling tool (ollama_func_3384) *****
135.80 USD
*********************************************************

--------------------------------------------------------------------------------
chatbot (to user_proxy):

The result is 135.80 USD.

--------------------------------------------------------------------------------
SUMMARY: 123.45 EUR is equivalent to 135.80 USD.

@marklysze (Collaborator, Author) commented Jul 1, 2024

Parallel tool calling (the LLM suggests multiple tool calls at a time):

import autogen
import json
from typing import Literal
from typing_extensions import Annotated

# THIS TESTS: PARALLEL TOOL CALLING

altmodel_llm_config = {
    "config_list":
    [
        {
            "api_type": "ollama",
            "model": "llama3:8b-instruct-q6_K",
            "client_host": "http://192.168.0.1:11434",
            "seed": 43,
            "cache_seed": None,
            "hide_tools": "if_all_run"
        }
    ]
}

# Create the agent and include examples of the function calling JSON in the prompt
# to help guide the model
chatbot = autogen.AssistantAgent(
    name="chatbot",
    system_message="For currency exchange and weather forecasting tasks, "
        "only use the functions you have been provided with.",
    llm_config=altmodel_llm_config,
)


user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    is_termination_msg=lambda x: "TERMINATE" in (x.get("content") or ""),
    human_input_mode="NEVER",
    max_consecutive_auto_reply=1,
)

# Currency Exchange function

CurrencySymbol = Literal["USD", "EUR"]

# Define our function that we expect to call
def exchange_rate(base_currency: CurrencySymbol, quote_currency: CurrencySymbol) -> float:
    if base_currency == quote_currency:
        return 1.0
    elif base_currency == "USD" and quote_currency == "EUR":
        return 1 / 1.1
    elif base_currency == "EUR" and quote_currency == "USD":
        return 1.1
    else:
        raise ValueError(f"Unknown currencies {base_currency}, {quote_currency}")

# Register the function with the agent
@user_proxy.register_for_execution()
@chatbot.register_for_llm(description="Currency exchange calculator.")
def currency_calculator(
    base_amount: Annotated[float, "Amount of currency in base_currency"],
    base_currency: Annotated[CurrencySymbol, "Base currency"] = "USD",
    quote_currency: Annotated[CurrencySymbol, "Quote currency"] = "EUR",
) -> str:
    quote_amount = exchange_rate(base_currency, quote_currency) * base_amount
    return f"{format(quote_amount, '.2f')} {quote_currency}"


# Weather function

# Example function to make available to model
def get_current_weather(location, unit="fahrenheit"):
    """Get the weather for some location"""
    if "chicago" in location.lower():
        return json.dumps({"location": "Chicago", "temperature": "13", "unit": unit})
    elif "san francisco" in location.lower():
        return json.dumps({"location": "San Francisco", "temperature": "55", "unit": unit})
    elif "new york" in location.lower():
        return json.dumps({"location": "New York", "temperature": "11", "unit": unit})
    else:
        return json.dumps({"location": location, "temperature": "unknown"})

# Register the function with the agent
@user_proxy.register_for_execution()
@chatbot.register_for_llm(description="Weather forecast for US cities.")
def weather_forecast(
    location: Annotated[str, "City name"],
) -> str:
    weather_details = get_current_weather(location=location)
    weather = json.loads(weather_details)
    return f"{weather['location']} will be {weather['temperature']} degrees {weather['unit']}"

# start the conversation
res = user_proxy.initiate_chat(
    chatbot,
    message="What's the weather in New York and can you tell me how much is 123.45 EUR in USD so I can spend it on my holiday?",
    summary_method="reflection_with_llm",
)

print(f"SUMMARY: {res.summary['content']}")

and result:

user_proxy (to chatbot):

What's the weather in New York and can you tell me how much is 123.45 EUR in USD so I can spend it on my holiday?

--------------------------------------------------------------------------------
chatbot (to user_proxy):


***** Suggested tool call (ollama_func_5948): weather_forecast *****
Arguments: 
{"location": "New York"}
********************************************************************
***** Suggested tool call (ollama_func_5949): currency_calculator *****
Arguments: 
{"base_amount": 123.45, "base_currency": "EUR", "quote_currency": "USD"}
***********************************************************************

--------------------------------------------------------------------------------

>>>>>>>> EXECUTING FUNCTION weather_forecast...

>>>>>>>> EXECUTING FUNCTION currency_calculator...
user_proxy (to chatbot):

user_proxy (to chatbot):

***** Response from calling tool (ollama_func_5948) *****
New York will be 11 degrees fahrenheit
*********************************************************

--------------------------------------------------------------------------------
user_proxy (to chatbot):

***** Response from calling tool (ollama_func_5949) *****
135.80 USD
*********************************************************

--------------------------------------------------------------------------------
chatbot (to user_proxy):

It will be 11 degrees Fahrenheit in New York and $135.80 is the equivalent of €123.45 in USD, making it a suitable amount to spend on your holiday.

--------------------------------------------------------------------------------
SUMMARY: New York will be 11 degrees Fahrenheit. €123.45 is equivalent to $135.80 in USD.

@ekzhu (Collaborator) commented Oct 1, 2024

Fixing issues with openai environment #3587
