This project shows how to build durable AI agents using four production-grade components:
- OpenAI Agents SDK — agent runtime and coordination
- Temporal Python SDK — durable workflows, retries, and long-running tasks
- Model Context Protocol (MCP) — standard interface for tools and data sources
- (Optional) Pydantic Logfire — unified observability (logs, traces, metrics) with native LLM & agent instrumentation
If you’re tired of debugging Celery tasks, running into scalability limits, or wrestling with LangGraph dependency issues, this tutorial is for you.
## What we mean by “durable agents”

Agents whose steps are persisted and replayable, with built-in retries, timeouts, and idempotency, so they survive crashes, restarts, and long-running work.
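To make that concrete, here is a library-free sketch of the semantics Temporal provides, assuming a persisted results store: a completed step is replayed instead of re-executed, and a failing step is retried. This is an illustration of the idea, not how the Temporal SDK is actually implemented.

```python
import time


def run_step_durably(step_fn, step_id, completed, *, retries=3, delay=0.0):
    """Run one agent step with retries and idempotency.

    `completed` stands in for a persisted results store: if the step
    already ran (e.g. before a crash), its stored result is replayed
    instead of re-executing the step. That replay-from-history model
    is the core idea behind durable execution.
    """
    if step_id in completed:              # idempotency: replay persisted result
        return completed[step_id]
    for attempt in range(1, retries + 1):
        try:
            result = step_fn()
            completed[step_id] = result   # persist before moving on
            return result
        except Exception:
            if attempt == retries:
                raise                     # retries exhausted: surface the error
            time.sleep(delay)             # back off, then retry
```

With this model, a restarted agent re-runs its workflow from the top, but every step that already completed is answered from the store rather than executed again.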
# 1. Install dependencies

```bash
uv sync
source .venv/bin/activate
```
# 2. Start the MCP server

In this example, we package the prompts of the financial analyst example in an MCP server.

```bash
uv run uvicorn examples.mcp_server.main:app --reload --port 9000
```

To inspect and interact with the server, run:

```bash
mcp dev examples/mcp_server/financial_research_server.py
```

This command shows the available tools, schemas, and prompt interfaces exposed by the MCP server.
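Under the hood, MCP clients talk to servers over JSON-RPC 2.0: listing a server's tools is a `tools/list` request, and invoking one is a `tools/call` request. The sketch below builds those messages with the standard library; the tool name `get_financials` and its arguments are placeholders for illustration, not tools this server necessarily exposes.

```python
import json

# JSON-RPC 2.0 request asking an MCP server to enumerate its tools.
list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# JSON-RPC 2.0 request invoking one tool by name with arguments.
call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_financials",         # hypothetical tool name
        "arguments": {"ticker": "AAPL"},  # hypothetical arguments
    },
}

print(json.dumps(call_tool, indent=2))
```

Because the interface is just structured messages like these, any MCP-aware agent runtime can call the same server without bespoke glue code.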
# 3. Start a Temporal server

Execute the following commands to install the Temporal CLI and start a local development server with all of its dependencies.

```bash
brew install temporal
temporal server start-dev
```

# 4. Start a worker

Execute the following command to start a worker that runs the examples. Ensure that all your environment variables are defined in a `.env` file.

```bash
export PYTHONPATH=.
uv run --env-file .env examples/financial_research_agent/temporal/worker.py
```

# 5. Run the example

```bash
export PYTHONPATH=.
uv run --env-file .env examples/financial_research_agent/main.py
```

This example is inspired by these two great examples:
Readers are invited to visit this Medium article for more details.


