diff --git a/python/samples/05-end-to-end/hosted_agents/foundry_multiagent/.dockerignore b/python/samples/05-end-to-end/hosted_agents/foundry_multiagent/.dockerignore new file mode 100644 index 0000000000..bdb7cbb34f --- /dev/null +++ b/python/samples/05-end-to-end/hosted_agents/foundry_multiagent/.dockerignore @@ -0,0 +1,51 @@ +# Build artifacts +bin/ +obj/ + +# IDE and editor files +.vs/ +.vscode/ +*.user +*.suo +.foundry/ + +# Source control +.git/ + +# Documentation +README.md + +# Ignore files +.gitignore +.dockerignore + +# Logs +*.log + +# Temporary files +*.tmp +*.temp + +# OS files +.DS_Store +Thumbs.db + +# Package manager directories +node_modules/ +packages/ + +# Test results +TestResults/ +*.trx + +# Coverage reports +coverage/ +*.coverage +*.coveragexml + +# Local development config +appsettings.Development.json +.env + +.venv/ +__pycache__/ diff --git a/python/samples/05-end-to-end/hosted_agents/foundry_multiagent/.env.sample b/python/samples/05-end-to-end/hosted_agents/foundry_multiagent/.env.sample new file mode 100644 index 0000000000..7a7d4d5ec3 --- /dev/null +++ b/python/samples/05-end-to-end/hosted_agents/foundry_multiagent/.env.sample @@ -0,0 +1,3 @@ +# IMPORTANT: Never commit .env to version control - add it to .gitignore +PROJECT_ENDPOINT= +MODEL_DEPLOYMENT_NAME= \ No newline at end of file diff --git a/python/samples/05-end-to-end/hosted_agents/foundry_multiagent/Dockerfile b/python/samples/05-end-to-end/hosted_agents/foundry_multiagent/Dockerfile new file mode 100644 index 0000000000..0cc939d9b3 --- /dev/null +++ b/python/samples/05-end-to-end/hosted_agents/foundry_multiagent/Dockerfile @@ -0,0 +1,16 @@ +FROM python:3.12-slim + +WORKDIR /app + +COPY . 
user_agent/ +WORKDIR /app/user_agent + +RUN if [ -f requirements.txt ]; then \ + pip install -r requirements.txt; \ + else \ + echo "No requirements.txt found"; \ + fi + +EXPOSE 8088 + +CMD ["python", "main.py"] diff --git a/python/samples/05-end-to-end/hosted_agents/foundry_multiagent/README.md b/python/samples/05-end-to-end/hosted_agents/foundry_multiagent/README.md new file mode 100644 index 0000000000..beb70ec982 --- /dev/null +++ b/python/samples/05-end-to-end/hosted_agents/foundry_multiagent/README.md @@ -0,0 +1,159 @@ +**IMPORTANT!** All samples and other resources made available in this GitHub repository ("samples") are designed to assist in accelerating development of agents, solutions, and agent workflows for various scenarios. Review all provided resources and carefully test output behavior in the context of your use case. AI responses may be inaccurate and AI actions should be monitored with human oversight. Learn more in the transparency documents for [Agent Service](https://learn.microsoft.com/en-us/azure/ai-foundry/responsible-ai/agents/transparency-note) and [Agent Framework](https://github.com/microsoft/agent-framework/blob/main/TRANSPARENCY_FAQ.md). + +Agents, solutions, or other output you create may be subject to legal and regulatory requirements, may require licenses, or may not be suitable for all industries, scenarios, or use cases. By using any sample, you are acknowledging that any output created using those samples is solely your responsibility, and that you will comply with all applicable laws, regulations, and relevant safety standards, terms of service, and codes of conduct. + +Third-party samples contained in this folder are subject to their own designated terms, and they have not been tested or verified by Microsoft or its affiliates. + +Microsoft has no responsibility to you or others with respect to any of these samples or any resulting output. 
+ +# What this sample demonstrates + +This sample demonstrates a **key advantage of code-based hosted agents**: + +- **Agents in Workflows** - Use AI agents as executors within a workflow pipeline + +Code-based agents can execute **any Python code** you write. This sample includes a multi-agent workflow where Writer and Reviewer agents collaborate to draft content and provide review feedback. + +The agent is hosted using the [Azure AI AgentServer SDK](https://pypi.org/project/azure-ai-agentserver-agentframework/) and can be deployed to Microsoft Foundry using the Azure Developer CLI. + +## How It Works + +### Agents in Workflows + +This sample demonstrates the integration of AI agents within a workflow pipeline. The workflow operates as follows: + +1. **Writer Agent** - Drafts content +2. **Reviewer Agent** - Reviews the draft and provides concise, actionable feedback + +### Agent Hosting + +The agent workflow is hosted using the [Azure AI AgentServer SDK](https://pypi.org/project/azure-ai-agentserver-agentframework/), +which provisions a REST API endpoint compatible with the OpenAI Responses protocol. + +### Agent Deployment + +The hosted agent workflow can be deployed to Microsoft Foundry using the Azure Developer CLI [ai agent](https://learn.microsoft.com/en-us/azure/ai-foundry/agents/concepts/hosted-agents?view=foundry&tabs=cli#create-a-hosted-agent) extension. + +## Running the Agent Locally + +### Prerequisites + +Before running this sample, ensure you have: + +1. **Azure AI Foundry Project** + - Project created in [Azure AI Foundry](https://learn.microsoft.com/en-us/azure/ai-foundry/what-is-foundry?view=foundry#microsoft-foundry-portals) + - Chat model deployed (e.g., `gpt-4o` or `gpt-4.1`) + - Note your project endpoint URL and model deployment name + +2. **Azure CLI** + - Installed and authenticated + - Run `az login` and verify with `az account show` + +3. 
**Python 3.10 or higher** + - Verify your version: `python --version` + - If you have Python 3.9 or older, install a newer version: + - Windows: `winget install Python.Python.3.12` + - macOS: `brew install python@3.12` + - Linux: Use your package manager + +### Environment Variables + +Set the following environment variables (matching `agent.yaml`): + +- `PROJECT_ENDPOINT` - Your Azure AI Foundry project endpoint URL (required) +- `MODEL_DEPLOYMENT_NAME` - The deployment name for your chat model (defaults to `gpt-4.1-mini`) + +This sample loads environment variables from a local `.env` file if present. + +Create a `.env` file in this directory with the following content: + +``` +PROJECT_ENDPOINT=https://.services.ai.azure.com/api/projects/ +MODEL_DEPLOYMENT_NAME=gpt-4.1-mini +``` + +Or set them via PowerShell: + +```powershell +# Replace with your actual values +$env:PROJECT_ENDPOINT="https://.services.ai.azure.com/api/projects/" +$env:MODEL_DEPLOYMENT_NAME="gpt-4.1-mini" +``` + +### Setting Up a Virtual Environment + +It's recommended to use a virtual environment to isolate project dependencies: + +**macOS/Linux:** + +```bash +python -m venv .venv +source .venv/bin/activate +``` + +**Windows (PowerShell):** + +```powershell +python -m venv .venv +.\.venv\Scripts\Activate.ps1 +``` + +### Installing Dependencies + +Install the required Python dependencies using pip: + +```bash +pip install -r requirements.txt +``` + +### Running the Sample + +To run the agent, execute the following command in your terminal: + +```powershell +python main.py +``` + +This will start the hosted agent locally on `http://localhost:8088/`. + +### Interacting with the Agent + +**PowerShell (Windows):** + +```powershell +$body = @{ + input = "Create a slogan for a new electric SUV that is affordable and fun to drive." 
+ stream = $false +} | ConvertTo-Json + +Invoke-RestMethod -Uri http://localhost:8088/responses -Method Post -Body $body -ContentType "application/json" +``` + +**Bash/curl (Linux/macOS):** + +```bash +curl -sS -H "Content-Type: application/json" -X POST http://localhost:8088/responses \ + -d '{"input": "Create a slogan for a new electric SUV that is affordable and fun to drive.","stream":false}' +``` + +### Deploying the Agent to Microsoft Foundry + +To deploy your agent to Microsoft Foundry, follow the comprehensive deployment guide at https://learn.microsoft.com/en-us/azure/ai-foundry/agents/concepts/hosted-agents?view=foundry&tabs=cli + +## Troubleshooting + +### Images built on Apple Silicon or other ARM64 machines do not work on our service + +We **recommend using `azd` cloud build**, which always builds images with the correct architecture. + +If you choose to **build locally**, and your machine is **not `linux/amd64`** (for example, an Apple Silicon Mac), the image will **not be compatible with our service**, causing runtime failures. + +**Fix for local builds** + +Use this command to build the image locally: + +```shell +docker build --platform=linux/amd64 -t image . +``` + +This forces the image to be built for the required `amd64` architecture. diff --git a/python/samples/05-end-to-end/hosted_agents/foundry_multiagent/agent.yaml b/python/samples/05-end-to-end/hosted_agents/foundry_multiagent/agent.yaml new file mode 100644 index 0000000000..c956905571 --- /dev/null +++ b/python/samples/05-end-to-end/hosted_agents/foundry_multiagent/agent.yaml @@ -0,0 +1,28 @@ +# yaml-language-server: $schema=https://raw.githubusercontent.com/microsoft/AgentSchema/refs/heads/main/schemas/v1.0/ContainerAgent.yaml + +kind: hosted +name: foundry-multiagent +description: > + A multi-agent workflow featuring a Writer and Reviewer that collaborate + to create and refine content. 
+metadata: + authors: + - Microsoft + tags: + - Azure AI AgentServer + - Microsoft Agent Framework + - Multi-Agent Workflow + - Writer-Reviewer + - Content Creation +protocols: + - protocol: responses + version: v1 +environment_variables: + - name: PROJECT_ENDPOINT + value: ${PROJECT_ENDPOINT} + - name: MODEL_DEPLOYMENT_NAME + value: "{{chat}}" +resources: + - kind: model + id: gpt-4o-mini + name: chat diff --git a/python/samples/05-end-to-end/hosted_agents/foundry_multiagent/main.py b/python/samples/05-end-to-end/hosted_agents/foundry_multiagent/main.py new file mode 100644 index 0000000000..9171113c97 --- /dev/null +++ b/python/samples/05-end-to-end/hosted_agents/foundry_multiagent/main.py @@ -0,0 +1,75 @@ +# Copyright (c) Microsoft. All rights reserved. + +import asyncio +import os +from contextlib import asynccontextmanager + +from agent_framework import WorkflowBuilder +from agent_framework.azure import AzureAIProjectAgentProvider +from azure.ai.agentserver.agentframework import from_agent_framework +from azure.identity.aio import AzureCliCredential, ManagedIdentityCredential +from dotenv import load_dotenv + +load_dotenv(override=True) + +# Configure these for your Foundry project +# Read the explicit variables present in the .env file +PROJECT_ENDPOINT = os.getenv( + "PROJECT_ENDPOINT" +) # e.g., "https://.services.ai.azure.com/api/projects/" +MODEL_DEPLOYMENT_NAME = os.getenv( + "MODEL_DEPLOYMENT_NAME", "gpt-4.1-mini" +) # Your model deployment name e.g., "gpt-4.1-mini" + + +def get_credential(): + """Will use Managed Identity when running in Azure, otherwise falls back to Azure CLI Credential.""" + return ( + ManagedIdentityCredential() + if os.getenv("MSI_ENDPOINT") + else AzureCliCredential() + ) + + +@asynccontextmanager +async def create_agents(): + async with ( + get_credential() as credential, + AzureAIProjectAgentProvider( + project_endpoint=PROJECT_ENDPOINT, + model=MODEL_DEPLOYMENT_NAME, + credential=credential, + ) as provider, + ): + writer = 
await provider.create_agent( + name="Writer", + instructions="You are an excellent content writer. You create new content and edit content based on the feedback.", + ) + reviewer = await provider.create_agent( + name="Reviewer", + instructions="You are an excellent content reviewer. Provide actionable feedback to the writer about the provided content in the most concise manner possible.", + ) + yield writer, reviewer + + +def create_workflow(writer, reviewer): + workflow = WorkflowBuilder(start_executor=writer).add_edge(writer, reviewer).build() + return workflow.as_agent() + + +async def main() -> None: + """ + The writer and reviewer multi-agent workflow. + + Environment variables required: + - PROJECT_ENDPOINT: Your Microsoft Foundry project endpoint + - MODEL_DEPLOYMENT_NAME: Your Microsoft Foundry model deployment name + """ + + async with create_agents() as (writer, reviewer): + agent = create_workflow(writer, reviewer) + await from_agent_framework(agent).run_async() + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/python/samples/05-end-to-end/hosted_agents/foundry_multiagent/requirements.txt b/python/samples/05-end-to-end/hosted_agents/foundry_multiagent/requirements.txt new file mode 100644 index 0000000000..5ee6dd2eb5 --- /dev/null +++ b/python/samples/05-end-to-end/hosted_agents/foundry_multiagent/requirements.txt @@ -0,0 +1,4 @@ +azure-ai-agentserver-agentframework==1.0.0b16 +agent-framework-azure-ai +python-dotenv +azure-identity diff --git a/python/samples/05-end-to-end/hosted_agents/foundry_singleagent/.dockerignore b/python/samples/05-end-to-end/hosted_agents/foundry_singleagent/.dockerignore new file mode 100644 index 0000000000..779bc67aae --- /dev/null +++ b/python/samples/05-end-to-end/hosted_agents/foundry_singleagent/.dockerignore @@ -0,0 +1,66 @@ +# Virtual environments +.venv/ +venv/ +env/ +.python-version + +# Environment files with secrets +.env +.env.* +*.local + +# Python build artifacts +__pycache__/ +*.py[cod] +*$py.class +*.so +.Python +build/ 
+develop-eggs/ +dist/ +downloads/ +eggs/ +.eggs/ +lib/ +lib64/ +parts/ +sdist/ +var/ +wheels/ +*.egg-info/ +.installed.cfg +*.egg + +# Testing +.tox/ +.nox/ +.coverage +.coverage.* +htmlcov/ +.pytest_cache/ +.mypy_cache/ + +# IDE and OS files +.DS_Store +.idea/ +.vscode/ +*.swp +*.swo +*~ + +# Foundry config +.foundry/ +build-source-*/ + +# Git +.git/ +.gitignore + +# Docker +.dockerignore + +# Documentation +docs/ +*.md +!README.md +LICENSE diff --git a/python/samples/05-end-to-end/hosted_agents/foundry_singleagent/Dockerfile b/python/samples/05-end-to-end/hosted_agents/foundry_singleagent/Dockerfile new file mode 100644 index 0000000000..abec2e20e1 --- /dev/null +++ b/python/samples/05-end-to-end/hosted_agents/foundry_singleagent/Dockerfile @@ -0,0 +1,13 @@ +FROM python:3.12-slim + +WORKDIR /app + +COPY requirements.txt . + +RUN pip install --no-cache-dir -r requirements.txt + +COPY main.py . + +EXPOSE 8088 + +CMD ["python", "main.py"] \ No newline at end of file diff --git a/python/samples/05-end-to-end/hosted_agents/foundry_singleagent/README.md b/python/samples/05-end-to-end/hosted_agents/foundry_singleagent/README.md new file mode 100644 index 0000000000..eaa06a2756 --- /dev/null +++ b/python/samples/05-end-to-end/hosted_agents/foundry_singleagent/README.md @@ -0,0 +1,166 @@ +**IMPORTANT!** All samples and other resources made available in this GitHub repository ("samples") are designed to assist in accelerating development of agents, solutions, and agent workflows for various scenarios. Review all provided resources and carefully test output behavior in the context of your use case. AI responses may be inaccurate and AI actions should be monitored with human oversight. Learn more in the transparency documents for [Agent Service](https://learn.microsoft.com/en-us/azure/ai-foundry/responsible-ai/agents/transparency-note) and [Agent Framework](https://github.com/microsoft/agent-framework/blob/main/TRANSPARENCY_FAQ.md). 
+ +Agents, solutions, or other output you create may be subject to legal and regulatory requirements, may require licenses, or may not be suitable for all industries, scenarios, or use cases. By using any sample, you are acknowledging that any output created using those samples is solely your responsibility, and that you will comply with all applicable laws, regulations, and relevant safety standards, terms of service, and codes of conduct. + +Third-party samples contained in this folder are subject to their own designated terms, and they have not been tested or verified by Microsoft or its affiliates. + +Microsoft has no responsibility to you or others with respect to any of these samples or any resulting output. + +# What this sample demonstrates + +This sample demonstrates a **key advantage of code-based hosted agents**: + +- **Local Python tool execution** - Run custom Python functions as agent tools + +Code-based agents can execute **any Python code** you write. This sample includes a Seattle Hotel Agent with a `get_available_hotels` tool that searches for available hotels based on check-in/check-out dates and budget preferences. + +The agent is hosted using the [Azure AI AgentServer SDK](https://pypi.org/project/azure-ai-agentserver-agentframework/) and can be deployed to Microsoft Foundry using the Azure Developer CLI. + +## How It Works + +### Local Tools Integration + +In [main.py](main.py), the agent uses a local Python function (`get_available_hotels`) that simulates a hotel availability API. This demonstrates how code-based agents can execute custom server-side logic that prompt agents cannot access. 
+ +The tool accepts: +- **check_in_date** - Check-in date in YYYY-MM-DD format +- **check_out_date** - Check-out date in YYYY-MM-DD format +- **max_price** - Maximum price per night in USD (optional, defaults to $500) + +### Agent Hosting + +The agent is hosted using the [Azure AI AgentServer SDK](https://pypi.org/project/azure-ai-agentserver-agentframework/), +which provisions a REST API endpoint compatible with the OpenAI Responses protocol. + +### Agent Deployment + +The hosted agent can be deployed to Microsoft Foundry using the Azure Developer CLI [ai agent](https://learn.microsoft.com/en-us/azure/ai-foundry/agents/concepts/hosted-agents?view=foundry&tabs=cli#create-a-hosted-agent) extension. + +## Running the Agent Locally + +### Prerequisites + +Before running this sample, ensure you have: + +1. **Azure AI Foundry Project** + - Project created in [Azure AI Foundry](https://learn.microsoft.com/en-us/azure/ai-foundry/what-is-foundry?view=foundry#microsoft-foundry-portals) + - Chat model deployed (e.g., `gpt-4o` or `gpt-4.1`) + - Note your project endpoint URL and model deployment name + +2. **Azure CLI** + - Installed and authenticated + - Run `az login` and verify with `az account show` + +3. **Python 3.10 or higher** + - Verify your version: `python --version` + - If you have Python 3.9 or older, install a newer version: + - Windows: `winget install Python.Python.3.12` + - macOS: `brew install python@3.12` + - Linux: Use your package manager + +### Environment Variables + +Set the following environment variables (matching `agent.yaml`): + +- `PROJECT_ENDPOINT` - Your Azure AI Foundry project endpoint URL (required) +- `MODEL_DEPLOYMENT_NAME` - The deployment name for your chat model (defaults to `gpt-4.1-mini`) + +This sample loads environment variables from a local `.env` file if present. 
+ +Create a `.env` file in this directory with the following content: + +``` +PROJECT_ENDPOINT=https://.services.ai.azure.com/api/projects/ +MODEL_DEPLOYMENT_NAME=gpt-4.1-mini +``` + +Or set them via PowerShell: + +```powershell +# Replace with your actual values +$env:PROJECT_ENDPOINT="https://.services.ai.azure.com/api/projects/" +$env:MODEL_DEPLOYMENT_NAME="gpt-4.1-mini" +``` + +### Setting Up a Virtual Environment + +It's recommended to use a virtual environment to isolate project dependencies: + +**macOS/Linux:** +```bash +python -m venv .venv +source .venv/bin/activate +``` + +**Windows (PowerShell):** +```powershell +python -m venv .venv +.\.venv\Scripts\Activate.ps1 +``` + +### Installing Dependencies + +Install the required Python dependencies using pip: + +```bash +pip install -r requirements.txt +``` + +The required packages are: +- `azure-ai-agentserver-agentframework` - Agent Framework and AgentServer SDK +- `python-dotenv` - Load environment variables from `.env` file +- `azure-identity` - Azure authentication +- `azure-monitor-opentelemetry-exporter` - Azure Monitor telemetry export +- `opentelemetry-sdk` / `opentelemetry-api` - OpenTelemetry for tracing + +### Running the Sample + +To run the agent, execute the following command in your terminal: + +```powershell +python main.py +``` + +This will start the hosted agent locally on `http://localhost:8088/`. 
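
If you prefer to call the endpoint from Python, here is a minimal client sketch using only the standard library (`ask_agent` is a hypothetical helper name for this sketch; it assumes the server started by `python main.py` is listening on port 8088):

```python
import json
import urllib.request


def ask_agent(question: str, url: str = "http://localhost:8088/responses") -> dict:
    """POST a question to the locally running agent and return the parsed JSON response."""
    payload = json.dumps({"input": question, "stream": False}).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


# Example (requires the server to be running):
# print(ask_agent("Find me hotels in Seattle for March 20-23, 2025 under $200 per night"))
```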
+ +### Interacting with the Agent + +**PowerShell (Windows):** +```powershell +$body = @{ + input = 'I need a hotel in Seattle from 2025-03-15 to 2025-03-18, budget under $200 per night' + stream = $false +} | ConvertTo-Json + +Invoke-RestMethod -Uri http://localhost:8088/responses -Method Post -Body $body -ContentType "application/json" +``` + +Note that the input string is single-quoted so that PowerShell does not expand `$200` as a variable. + +**Bash/curl (Linux/macOS):** +```bash +curl -sS -H "Content-Type: application/json" -X POST http://localhost:8088/responses \ + -d '{"input": "Find me hotels in Seattle for March 20-23, 2025 under $200 per night","stream":false}' +``` + +The agent will use the `get_available_hotels` tool to search for available hotels matching your criteria. + +### Deploying the Agent to Microsoft Foundry + +To deploy your agent to Microsoft Foundry, follow the comprehensive deployment guide at https://learn.microsoft.com/en-us/azure/ai-foundry/agents/concepts/hosted-agents?view=foundry&tabs=cli + +## Troubleshooting + +### Images built on Apple Silicon or other ARM64 machines do not work on our service + +We **recommend using `azd` cloud build**, which always builds images with the correct architecture. + +If you choose to **build locally** and your machine is **not `linux/amd64`** (for example, an Apple Silicon Mac), the image will **not be compatible with our service**, causing runtime failures. + +**Fix for local builds** + +Use this command to build the image locally: + +```shell +docker build --platform=linux/amd64 -t image . +``` + +This forces the image to be built for the required `amd64` architecture. 
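
Once the image builds, you can also run it locally to smoke-test the container before deploying. This is a sketch: `seattle-hotel-agent` is an arbitrary image tag, and `--env-file .env` assumes the `.env` file described above. Note that `AzureCliCredential` is not available inside the container, so a local container run may still fail authentication unless you supply credentials some other way.

```shell
# Build for the service's required architecture, then run with the sample's env vars.
docker build --platform=linux/amd64 -t seattle-hotel-agent .
docker run --rm --platform=linux/amd64 -p 8088:8088 --env-file .env seattle-hotel-agent
```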
\ No newline at end of file diff --git a/python/samples/05-end-to-end/hosted_agents/foundry_singleagent/agent.yaml b/python/samples/05-end-to-end/hosted_agents/foundry_singleagent/agent.yaml new file mode 100644 index 0000000000..d85d82ece6 --- /dev/null +++ b/python/samples/05-end-to-end/hosted_agents/foundry_singleagent/agent.yaml @@ -0,0 +1,31 @@ +# yaml-language-server: $schema=https://raw.githubusercontent.com/microsoft/AgentSchema/refs/heads/main/schemas/v1.0/ContainerAgent.yaml + +kind: hosted +name: foundry-singleagent +# Brief description of what this agent does +description: > + A travel assistant agent that helps users find hotels in Seattle. + Demonstrates local Python tool execution - a key advantage of code-based + hosted agents over prompt agents. +metadata: + # Categorization tags for organizing and discovering agents + authors: + - Microsoft + tags: + - Azure AI AgentServer + - Microsoft Agent Framework + - Local Tools + - Travel Assistant + - Hotel Search +protocols: + - protocol: responses + version: v1 +environment_variables: + - name: PROJECT_ENDPOINT + value: ${AZURE_AI_PROJECT_ENDPOINT} + - name: MODEL_DEPLOYMENT_NAME + value: "{{chat}}" +resources: + - kind: model + id: gpt-4.1-mini + name: chat diff --git a/python/samples/05-end-to-end/hosted_agents/foundry_singleagent/main.py b/python/samples/05-end-to-end/hosted_agents/foundry_singleagent/main.py new file mode 100644 index 0000000000..bbf3a48361 --- /dev/null +++ b/python/samples/05-end-to-end/hosted_agents/foundry_singleagent/main.py @@ -0,0 +1,160 @@ +# Copyright (c) Microsoft. All rights reserved. + +""" +Seattle Hotel Agent - A simple agent with a tool to find hotels in Seattle. +Uses Microsoft Agent Framework with Azure AI Foundry. +Ready for deployment to Foundry Hosted Agent service. 
+""" + +import asyncio +import os +from datetime import datetime +from typing import Annotated + +from agent_framework.azure import AzureAIProjectAgentProvider +from azure.ai.agentserver.agentframework import from_agent_framework +from azure.identity.aio import AzureCliCredential, ManagedIdentityCredential +from dotenv import load_dotenv + + +load_dotenv(override=True) + +# Configure these for your Foundry project +# Read the explicit variables present in the .env file +PROJECT_ENDPOINT = os.getenv( + "PROJECT_ENDPOINT" +) # e.g., "https://.services.ai.azure.com" +MODEL_DEPLOYMENT_NAME = os.getenv( + "MODEL_DEPLOYMENT_NAME", "gpt-4.1-mini" +) # Your model deployment name e.g., "gpt-4.1-mini" + + +# Simulated hotel data for Seattle +SEATTLE_HOTELS = [ + { + "name": "Contoso Suites", + "price_per_night": 189, + "rating": 4.5, + "location": "Downtown", + }, + { + "name": "Fabrikam Residences", + "price_per_night": 159, + "rating": 4.2, + "location": "Pike Place Market", + }, + { + "name": "Alpine Ski House", + "price_per_night": 249, + "rating": 4.7, + "location": "Seattle Center", + }, + { + "name": "Margie's Travel Lodge", + "price_per_night": 219, + "rating": 4.4, + "location": "Waterfront", + }, + { + "name": "Northwind Inn", + "price_per_night": 139, + "rating": 4.0, + "location": "Capitol Hill", + }, + { + "name": "Relecloud Hotel", + "price_per_night": 99, + "rating": 3.8, + "location": "University District", + }, +] + + +def get_available_hotels( + check_in_date: Annotated[str, "Check-in date in YYYY-MM-DD format"], + check_out_date: Annotated[str, "Check-out date in YYYY-MM-DD format"], + max_price: Annotated[int, "Maximum price per night in USD (optional)"] = 500, +) -> str: + """ + Get available hotels in Seattle for the specified dates. + This simulates a call to a fake hotel availability API. 
+ """ + try: + # Parse dates + check_in = datetime.strptime(check_in_date, "%Y-%m-%d") + check_out = datetime.strptime(check_out_date, "%Y-%m-%d") + + # Validate dates + if check_out <= check_in: + return "Error: Check-out date must be after check-in date." + + nights = (check_out - check_in).days + + # Filter hotels by price + available_hotels = [ + hotel for hotel in SEATTLE_HOTELS if hotel["price_per_night"] <= max_price + ] + + if not available_hotels: + return ( + f"No hotels found in Seattle within your budget of ${max_price}/night." + ) + + # Build response + result = f"Available hotels in Seattle from {check_in_date} to {check_out_date} ({nights} nights):\n\n" + + for hotel in available_hotels: + total_cost = hotel["price_per_night"] * nights + result += f"**{hotel['name']}**\n" + result += f" Location: {hotel['location']}\n" + result += f" Rating: {hotel['rating']}/5\n" + result += f" ${hotel['price_per_night']}/night (Total: ${total_cost})\n\n" + + return result + + except ValueError as e: + return f"Error parsing dates. Please use YYYY-MM-DD format. Details: {str(e)}" + + +def get_credential(): + """Will use Managed Identity when running in Azure, otherwise falls back to Azure CLI Credential.""" + return ( + ManagedIdentityCredential() + if os.getenv("MSI_ENDPOINT") + else AzureCliCredential() + ) + + +async def main(): + """Main function to run the agent as a web server.""" + async with ( + get_credential() as credential, + AzureAIProjectAgentProvider( + project_endpoint=PROJECT_ENDPOINT, + model=MODEL_DEPLOYMENT_NAME, + credential=credential, + ) as provider, + ): + agent = await provider.create_agent( + name="SeattleHotelAgent", + instructions="""You are a helpful travel assistant specializing in finding hotels in Seattle, Washington. + +When a user asks about hotels in Seattle: +1. Ask for their check-in and check-out dates if not provided +2. Ask about their budget preferences if not mentioned +3. 
Use the get_available_hotels tool to find available options +4. Present the results in a friendly, informative way +5. Offer to help with additional questions about the hotels or Seattle + +Be conversational and helpful. If users ask about things outside of Seattle hotels, +politely let them know you specialize in Seattle hotel recommendations.""", + tools=[get_available_hotels], + ) + + print("Seattle Hotel Agent Server running on http://localhost:8088") + server = from_agent_framework(agent) + await server.run_async() + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/python/samples/05-end-to-end/hosted_agents/foundry_singleagent/requirements.txt b/python/samples/05-end-to-end/hosted_agents/foundry_singleagent/requirements.txt new file mode 100644 index 0000000000..a97eb7f13c --- /dev/null +++ b/python/samples/05-end-to-end/hosted_agents/foundry_singleagent/requirements.txt @@ -0,0 +1,9 @@ +azure-ai-agentserver-agentframework==1.0.0b16 +agent-framework-azure-ai +python-dotenv +azure-identity + +# Azure Monitor / OpenTelemetry +azure-monitor-opentelemetry-exporter>=1.0.0b46 +opentelemetry-sdk>=1.39.0 +opentelemetry-api>=1.39.0 \ No newline at end of file