Commit 51dab70

update runner (#132)
1 parent a7317dc commit 51dab70

174 files changed (+1931, −383 lines)


Taskfile.yml

Lines changed: 2 additions & 2 deletions
```diff
@@ -8,8 +8,8 @@ vars:
   GOLANGCI_LINT_VERSION: v2.4.0
   GOIMPORTS_VERSION: v0.29.0
   DPRINT_VERSION: 0.48.0
-  EXAMPLE_VERSION: "0.5.1"
-  RUNNER_VERSION: "0.5.0"
+  EXAMPLE_VERSION: "0.6.0"
+  RUNNER_VERSION: "0.6.0"
   VERSION: # if version is not passed we hack the semver by encoding the commit as pre-release
     sh: echo "${VERSION:-0.0.0-$(git rev-parse --short HEAD)}"
```
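The `VERSION` fallback above relies on standard `${VAR:-default}` shell parameter expansion, so an unset version becomes a semver-valid pre-release built from the short commit hash. A quick illustration (the hash shown is a stand-in, not a real commit):

```shell
# If VERSION is unset or empty, fall back to 0.0.0-<short hash>,
# which is a valid semver pre-release identifier.
unset VERSION
COMMIT=abc1234   # stand-in for $(git rev-parse --short HEAD)
echo "${VERSION:-0.0.0-$COMMIT}"   # prints 0.0.0-abc1234
```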

debian/arduino-app-cli/home/arduino/.local/share/arduino-app-cli/assets/0.5.0/api-docs/arduino/app_bricks/cloud_llm/API.md

Lines changed: 0 additions & 107 deletions
This file was deleted.
Lines changed: 3 additions & 4 deletions
```diff
@@ -67,7 +67,7 @@ Stop real-time audio classification.
 
 Terminates audio capture and releases any associated resources.
 
-#### `classify_from_file(audio_path: str, confidence: int)`
+#### `classify_from_file(audio_path: str, confidence: float)`
 
 Classify audio content from a WAV file.
 
@@ -80,9 +80,8 @@ Supported sample widths:
 ##### Parameters
 
 - **audio_path** (*str*): Path to the `.wav` audio file to classify.
-- **confidence** (*int*) (optional): Confidence threshold (0–1). If None,
-  the default confidence level specified during initialization
-  will be applied.
+- **confidence** (*float*) (optional): Minimum confidence threshold (0.0–1.0) required
+  for a detection to be considered valid. Defaults to 0.8 (80%).
 
 
 ##### Returns
```
Lines changed: 120 additions & 0 deletions
# cloud_llm API Reference

## Index

- Class `CloudLLM`
- Class `CloudModel`

---

## `CloudLLM` class

```python
class CloudLLM(api_key: str, model: Union[str, CloudModel], system_prompt: str, temperature: Optional[float], timeout: int)
```

A Brick for interacting with cloud-based Large Language Models (LLMs).

This class wraps LangChain functionality to provide a simplified, unified interface
for chatting with models like Claude, GPT, and Gemini. It supports both synchronous
'one-shot' responses and streaming output, with optional conversational memory.

### Parameters

- **api_key** (*str*): The API access key for the target LLM service. Defaults to the
  `API_KEY` environment variable.
- **model** (*Union[str, CloudModel]*): The model identifier. Accepts a `CloudModel`
  enum member (e.g., `CloudModel.OPENAI_GPT`) or its corresponding raw string
  value (e.g., `'gpt-4o-mini'`). Defaults to `CloudModel.ANTHROPIC_CLAUDE`.
- **system_prompt** (*str*): A system-level instruction that defines the AI's persona
  and constraints (e.g., "You are a helpful assistant"). Defaults to empty.
- **temperature** (*Optional[float]*): The sampling temperature, between 0.0 and 1.0.
  Higher values make output more random and creative; lower values make it more
  deterministic. Defaults to 0.7.
- **timeout** (*int*): The maximum duration in seconds to wait for a response before
  timing out. Defaults to 30.

### Raises

- **ValueError**: If `api_key` is not provided (empty string).

### Methods

#### `with_memory(max_messages: int)`

Enables conversational memory for this instance.

Configures the Brick to retain a window of previous messages, allowing the
AI to maintain context across multiple interactions.

##### Parameters

- **max_messages** (*int*): The maximum number of messages (user + AI) to keep
  in history. Older messages are discarded. Set to 0 to disable memory.
  Defaults to 10.

##### Returns

- (*CloudLLM*): The current instance, allowing for method chaining.
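The sliding-window behavior described for `with_memory` can be sketched with a plain `collections.deque`; this is an illustrative analogy for how a bounded history discards old messages, not the Brick's actual implementation:

```python
from collections import deque

# A deque with maxlen silently drops the oldest entries as new ones
# arrive, which mirrors the documented with_memory(max_messages=...)
# behavior: only the most recent window of messages is retained.
history = deque(maxlen=4)  # analogous to with_memory(max_messages=4)

for i in range(6):
    history.append(f"message {i}")

# Only the 4 most recent messages survive; "message 0" and
# "message 1" have been discarded.
print(list(history))
```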
#### `chat(message: str)`

Sends a message to the AI and blocks until the complete response is received.

This method automatically manages conversation history if memory is enabled.

##### Parameters

- **message** (*str*): The input text prompt from the user.

##### Returns

- (*str*): The complete text response generated by the AI.

##### Raises

- **RuntimeError**: If the internal chain is not initialized or if the API request fails.

#### `chat_stream(message: str)`

Sends a message to the AI and yields response tokens as they are generated.

This allows for processing or displaying the response in real time (streaming).
The generation can be interrupted by calling `stop_stream()`.

##### Parameters

- **message** (*str*): The input text prompt from the user.

##### Returns

- (*str*): Chunks of text (tokens) from the AI response.

##### Raises

- **RuntimeError**: If the internal chain is not initialized or if the API request fails.
- **AlreadyGenerating**: If a streaming session is already active.

#### `stop_stream()`

Signals the active streaming generation to stop.

This sets an internal flag that causes the `chat_stream` iterator to break
early. It has no effect if no stream is currently running.
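The flag-based interruption described for `stop_stream()` can be sketched in plain Python; this is a minimal, hypothetical illustration of the pattern (a flag checked cooperatively between tokens), not the Brick's actual internals:

```python
import threading

class TokenStreamer:
    """Hypothetical sketch of a cancellable token stream."""

    def __init__(self):
        self._stop = threading.Event()

    def stream(self, tokens):
        self._stop.clear()
        for tok in tokens:
            if self._stop.is_set():  # flag is checked between tokens
                break                # so the generator exits early
            yield tok

    def stop_stream(self):
        # Set the flag; harmless if no stream is currently running.
        self._stop.set()

s = TokenStreamer()
out = []
for tok in s.stream(["Hel", "lo", ", wor", "ld"]):
    out.append(tok)
    if tok == "lo":
        s.stop_stream()  # interrupt after the second chunk
print("".join(out))  # prints "Hello"
```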
#### `clear_memory()`

Clears the conversational memory history.

Resets the stored context. This is useful for starting a new conversation
topic without previous context interfering. Only applies if memory is enabled.

---

## `CloudModel` class

```python
class CloudModel()
```
