This example shows how to use function calls with local LLM models, using [Ollama](https://ollama.com/) as the local model provider and the [LiteLLM](https://docs.litellm.ai/docs/) proxy server, which exposes an OpenAI-API-compatible interface.

[Tool_Call_With_Ollama_And_LiteLLM.cs](https://github.com/microsoft/autogen/blob/main/dotnet/sample/AutoGen.OpenAI.Sample/Tool_Call_With_Ollama_And_LiteLLM.cs)

To run this example, the following prerequisites are required:
- Install [Ollama](https://ollama.com/) and [LiteLLM](https://docs.litellm.ai/docs/) on your local machine.
- A local model that supports function calling. In this example, `dolphincoder:latest` is used.

## Install Ollama and pull `dolphincoder:latest` model
First, install Ollama by following the instructions on the [Ollama website](https://ollama.com/).

After installing Ollama, pull the `dolphincoder:latest` model by running the following command:
```bash
ollama pull dolphincoder:latest
```

## Install LiteLLM and start the proxy server

You can install LiteLLM by following the instructions on the [LiteLLM website](https://docs.litellm.ai/docs/).
```bash
pip install 'litellm[proxy]'
```

Then, start the proxy server by running the following command:

```bash
litellm --model ollama_chat/dolphincoder --port 4000
```

This will start an OpenAI-API-compatible proxy server at `http://localhost:4000`. You can verify that the server is running by checking for the following output in the terminal:

```bash
#------------------------------------------------------------#
# #
# 'The worst thing about this product is...' #
# https://github.com/BerriAI/litellm/issues/new #
# #
#------------------------------------------------------------#

INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:4000 (Press CTRL+C to quit)
```

## Install AutoGen and AutoGen.SourceGenerator
In your project, install the AutoGen and AutoGen.SourceGenerator packages using the following commands:

```bash
dotnet add package AutoGen
dotnet add package AutoGen.SourceGenerator
```

The `AutoGen.SourceGenerator` package automatically generates type-safe `FunctionContract` definitions for your functions, so you don't have to write them by hand. For more information, please check out [Create type-safe function](Create-type-safe-function-call.md).

Then, in your project file, enable structured XML documentation by setting the `GenerateDocumentationFile` property to `true`:

```xml
<PropertyGroup>
    <!-- This enables structured XML documentation support -->
    <GenerateDocumentationFile>true</GenerateDocumentationFile>
</PropertyGroup>
```

## Define `WeatherReport` function and create @AutoGen.Core.FunctionCallMiddleware

Create a `public partial` class to host the methods you want to use in AutoGen agents. Each method must be a `public` instance method that returns `Task<string>`. After the methods are defined, mark them with the `AutoGen.Core.FunctionAttribute` attribute.

[!code-csharp[Define WeatherReport function](../../sample/AutoGen.OpenAI.Sample/Tool_Call_With_Ollama_And_LiteLLM.cs?name=Function)]

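If you do not have the sample file at hand, a minimal sketch of such a class is shown below. The class name `Function`, the method name `GetWeatherAsync`, and the canned weather string are illustrative assumptions inferred from the sample output at the end of this article, not necessarily the sample's exact code:

```csharp
using System.Threading.Tasks;
using AutoGen.Core;

public partial class Function
{
    /// <summary>
    /// Get the weather report for a city.
    /// </summary>
    /// <param name="city">The name of the city.</param>
    [Function]
    public async Task<string> GetWeatherAsync(string city)
    {
        // A canned answer keeps the sketch self-contained; replace it with a real weather lookup.
        return await Task.FromResult($"The weather in {city} is 72 degrees and sunny.");
    }
}
```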

Then create a @AutoGen.Core.FunctionCallMiddleware and add the `WeatherReport` function to the middleware. The middleware passes the `FunctionContract` to the agent when generating a response, and processes the tool call response when it receives a `ToolCallMessage`.

[!code-csharp[Create FunctionCallMiddleware](../../sample/AutoGen.OpenAI.Sample/Tool_Call_With_Ollama_And_LiteLLM.cs?name=Create_tools)]

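A minimal sketch of this step follows. It assumes the source generator exposes a `GetWeatherAsyncFunctionContract` property and a `GetWeatherAsyncWrapper` delegate on the partial class above; check the generated code in your project for the exact member names:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using AutoGen.Core;

var function = new Function();
var functionCallMiddleware = new FunctionCallMiddleware(
    functions: new[] { function.GetWeatherAsyncFunctionContract },
    functionMap: new Dictionary<string, Func<string, Task<string>>>
    {
        // Map the contract name to the generated wrapper, which deserializes the JSON
        // arguments from the model and invokes GetWeatherAsync.
        { function.GetWeatherAsyncFunctionContract.Name, function.GetWeatherAsyncWrapper },
    });
```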

## Create @AutoGen.OpenAI.OpenAIChatAgent with `GetWeatherReport` tool and chat with it

Because the LiteLLM proxy server is OpenAI-API compatible, we can use @AutoGen.OpenAI.OpenAIChatAgent to connect to it as a third-party OpenAI-API provider. The agent is also registered with a @AutoGen.Core.FunctionCallMiddleware that contains the `WeatherReport` tool, so the agent can call `WeatherReport` when generating a response.

[!code-csharp[Create an agent with tools](../../sample/AutoGen.OpenAI.Sample/Tool_Call_With_Ollama_And_LiteLLM.cs?name=Create_Agent)]

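For readers without the sample source, the sketch below outlines one way this wiring can look, continuing from the middleware created above. It assumes the OpenAI .NET SDK's `OpenAIClientOptions.Endpoint` is pointed at the LiteLLM proxy; constructor signatures differ between AutoGen and OpenAI SDK versions, so treat this as an outline rather than the sample's exact code:

```csharp
using System;
using System.ClientModel;
using AutoGen.Core;
using AutoGen.OpenAI;
using AutoGen.OpenAI.Extension;
using OpenAI;

// Point the OpenAI client at the LiteLLM proxy instead of api.openai.com.
// LiteLLM does not validate the API key by default, so a placeholder value is fine.
var openAIClient = new OpenAIClient(
    new ApiKeyCredential("api-key"),
    new OpenAIClientOptions
    {
        Endpoint = new Uri("http://localhost:4000"),
    });

var agent = new OpenAIChatAgent(
        chatClient: openAIClient.GetChatClient("dolphincoder"),
        name: "assistant",
        systemMessage: "You are a helpful AI assistant")
    .RegisterMessageConnector()                 // convert between OpenAI and AutoGen message types
    .RegisterMiddleware(functionCallMiddleware) // attach the weather tool defined earlier
    .RegisterPrintMessage();                    // print each reply to the console

var reply = await agent.SendAsync("What is the weather in New York?");
```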

The reply from the agent will be similar to the following:
```bash
AggregateMessage from assistant
--------------------
ToolCallMessage:
ToolCallMessage from assistant
--------------------
- GetWeatherAsync: {"city": "new york"}
--------------------

ToolCallResultMessage:
ToolCallResultMessage from assistant
--------------------
- GetWeatherAsync: The weather in new york is 72 degrees and sunny.
--------------------
```