Commits
29 commits
4cee25d
Add native Ollama LLM support
ayman3000 Nov 16, 2025
fe1ee91
Fix Ollama integration: add model_version, usage metadata, safe JSON …
ayman3000 Nov 16, 2025
3a786dd
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 17, 2025
28ca391
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 17, 2025
95e601c
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 18, 2025
a40e261
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 18, 2025
040e253
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 18, 2025
a79433c
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 18, 2025
2288461
Fix formatting and imports for CI
ayman3000 Nov 19, 2025
b909b56
Fix formatting and imports for CI
ayman3000 Nov 19, 2025
b20848c
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 19, 2025
4e8d9f4
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 19, 2025
f0a7138
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 19, 2025
d5fca86
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 19, 2025
74c8d72
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 19, 2025
f97d4bc
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 19, 2025
e2ab4b2
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 20, 2025
b9c11e5
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 20, 2025
0aa1f9f
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 21, 2025
f0b3f98
Fix hello_world_ollama_native/agent.py formatting
ayman3000 Nov 21, 2025
d646742
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 21, 2025
e4e33df
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 21, 2025
5b20acf
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 21, 2025
92a8b2a
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 22, 2025
87de44c
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 22, 2025
9318689
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 22, 2025
8bd865b
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 25, 2025
5d6688a
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 25, 2025
e486fe9
Merge branch 'main' into feature/ollama-llm
ayman3000 Nov 25, 2025
99 changes: 99 additions & 0 deletions contributing/samples/hello_world_ollama_native/README.md
@@ -0,0 +1,99 @@
# Using Ollama Models with ADK (Native Integration)

## Model Choice

If your agent uses tools, choose an Ollama model that supports **function calling**.
Tool support can be verified with:

```bash
ollama show mistral-small3.1
```
```
  Model
    architecture        mistral3
    parameters          24.0B
    context length      131072
    embedding length    5120
    quantization        Q4_K_M

  Capabilities
    completion
    vision
    tools
```

The model must list `tools` under **Capabilities**. Models without tool support will not execute ADK function calls correctly.

To inspect or customize a model template:
```bash
ollama show --modelfile llama3.1 > model_file_to_modify
```
Then create a modified model:

```bash
ollama create llama3.1-modified -f model_file_to_modify
```
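As an illustration (the parameter values below are placeholders, not recommendations), you could append extra settings to the exported template before rebuilding the model:

```bash
# Illustrative only: append sampling and context-window parameters
# to the exported Modelfile, then rebuild the model from it.
cat >> model_file_to_modify <<'EOF'
PARAMETER temperature 0.2
PARAMETER num_ctx 8192
EOF
ollama create llama3.1-modified -f model_file_to_modify
```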


## Native Ollama Provider in ADK

ADK includes a native `Ollama` model class that communicates directly with the Ollama server at `http://localhost:11434/api/chat`.

No LiteLLM provider, API keys, or OpenAI proxy endpoints are needed.
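For reference, this is the same chat endpoint you can exercise by hand with `curl`; a minimal request (assuming the `llama3.1` model has already been pulled) looks like this:

```bash
# Minimal sketch of the endpoint the native provider talks to.
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [{"role": "user", "content": "Say hello in one sentence."}],
  "stream": false
}'
```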

### Example agent
```python
import random

from google.adk.agents.llm_agent import Agent
from google.adk.models.ollama import Ollama


def roll_die(sides: int) -> int:
  return random.randint(1, sides)


def check_prime(numbers: list[int]) -> str:
  primes = []
  for number in numbers:
    number = int(number)
    if number <= 1:
      continue
    for i in range(2, int(number ** 0.5) + 1):
      if number % i == 0:
        break
    else:
      primes.append(number)
  return (
      "No prime numbers found."
      if not primes
      else f"{', '.join(map(str, primes))} are prime numbers."
  )


root_agent = Agent(
    model=Ollama(model="llama3.1"),
    name="dice_agent",
    description="Agent that rolls dice and checks primes using native Ollama.",
    instruction="Always use the provided tools.",
    tools=[roll_die, check_prime],
)
```
## Connecting to a remote Ollama server

The default Ollama endpoint is `http://localhost:11434`.

Override using an environment variable:
```bash
export OLLAMA_API_BASE="http://192.168.1.20:11434"
```
Or pass explicitly in code:
```python
Ollama(model="llama3.1", host="http://192.168.1.20:11434")
```


## Running the Example with ADK Web

Start the ADK Web UI:

```bash
adk web hello_world_ollama_native
```

The web UI opens in your browser, where you can test the agent's tool calls interactively.
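The sample also includes a programmatic runner (`main.py`). Assuming an Ollama server is running locally and the `llama3.1` model has been pulled, it can be run directly:

```bash
# Assumes a local Ollama server on the default port with llama3.1 pulled.
cd contributing/samples/hello_world_ollama_native
python main.py
```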




15 changes: 15 additions & 0 deletions contributing/samples/hello_world_ollama_native/__init__.py
@@ -0,0 +1,15 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from . import agent
89 changes: 89 additions & 0 deletions contributing/samples/hello_world_ollama_native/agent.py
@@ -0,0 +1,89 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import random

from google.adk.agents.llm_agent import Agent
from google.adk.models.ollama_llm import Ollama


def roll_die(sides: int) -> int:
  """Roll a die and return the rolled result.

  Args:
    sides: The integer number of sides the die has.

  Returns:
    An integer of the result of rolling the die.
  """
  return random.randint(1, sides)


def check_prime(numbers: list[int]) -> str:
  """Check if a given list of numbers are prime.

  Args:
    numbers: The list of numbers to check.

  Returns:
    A str indicating which numbers are prime.
  """
  primes = set()
  for number in numbers:
    number = int(number)
    if number <= 1:
      continue
    is_prime = True
    for i in range(2, int(number**0.5) + 1):
      if number % i == 0:
        is_prime = False
        break
    if is_prime:
      primes.add(number)
  return (
      "No prime numbers found."
      if not primes
      else f"{', '.join(str(num) for num in primes)} are prime numbers."
  )


root_agent = Agent(
    model=Ollama(model="llama3.1"),
    name="dice_roll_agent",
    description=(
        "hello world agent that can roll a die with any number of sides and"
        " check prime numbers."
    ),
    instruction="""
      You roll dice and answer questions about the outcome of the dice rolls.
      You can roll dice of different sizes.
      You can use multiple tools in parallel by calling functions in parallel (in one request and in one round).
      It is ok to discuss previous dice rolls, and comment on the dice rolls.
      When you are asked to roll a die, you must call the roll_die tool with the number of sides. Be sure to pass in an integer. Do not pass in a string.
      You should never roll a die on your own.
      When checking prime numbers, call the check_prime tool with a list of integers. Be sure to pass in a list of integers. You should never pass in a string.
      You should not check prime numbers before calling the tool.
      When you are asked to roll a die and check prime numbers, you should always make the following two function calls:
      1. You should first call the roll_die tool to get a roll. Wait for the function response before calling the check_prime tool.
      2. After you get the function response from the roll_die tool, you should call the check_prime tool with the roll_die result.
      2.1 If the user asks you to check primes based on previous rolls, make sure you include the previous rolls in the list.
      3. When you respond, you must include the roll_die result from step 1.
      You should always perform the previous 3 steps when asked to roll a die and check prime numbers.
      You should not rely on the previous history for prime results.
    """,
    tools=[
        roll_die,
        check_prime,
    ],
)
77 changes: 77 additions & 0 deletions contributing/samples/hello_world_ollama_native/main.py
@@ -0,0 +1,77 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import asyncio
import time
import warnings

import agent
from dotenv import load_dotenv
from google.adk import Runner
from google.adk.artifacts.in_memory_artifact_service import InMemoryArtifactService
from google.adk.cli.utils import logs
from google.adk.sessions.in_memory_session_service import InMemorySessionService
from google.adk.sessions.session import Session
from google.genai import types

load_dotenv(override=True)
warnings.filterwarnings('ignore', category=UserWarning)
logs.log_to_tmp_folder()


async def main():
  app_name = 'my_app'
  user_id_1 = 'user1'
  session_service = InMemorySessionService()
  artifact_service = InMemoryArtifactService()
  runner = Runner(
      app_name=app_name,
      agent=agent.root_agent,
      artifact_service=artifact_service,
      session_service=session_service,
  )
  session_11 = await session_service.create_session(
      app_name=app_name, user_id=user_id_1
  )

  async def run_prompt(session: Session, new_message: str):
    content = types.Content(
        role='user', parts=[types.Part.from_text(text=new_message)]
    )
    print('** User says:', content.model_dump(exclude_none=True))
    async for event in runner.run_async(
        user_id=user_id_1,
        session_id=session.id,
        new_message=content,
    ):
      if event.content.parts and event.content.parts[0].text:
        print(f'** {event.author}: {event.content.parts[0].text}')

  start_time = time.time()
  print('Start time:', start_time)
  print('------------------------------------')
  await run_prompt(session_11, 'Hi, introduce yourself.')
  await run_prompt(
      session_11, 'Roll a die with 100 sides and check if it is prime'
  )
  await run_prompt(session_11, 'Roll it again.')
  await run_prompt(session_11, 'What numbers did I get?')
  end_time = time.time()
  print('------------------------------------')
  print('End time:', end_time)
  print('Total time:', end_time - start_time)


if __name__ == '__main__':
  asyncio.run(main())