# docs: update bedtime story teller #20
**Merged:** stefanotorneo merged 33 commits into `arduino:release-1.0` from `91volt:docs-bedtime-storyteller` on Nov 27, 2025.
## Commits (33)
1. `134fa71` Init storyteller (Matteo-it)
2. `9c0a077` bedtime UI (Matteo-it)
3. `f47674a` Update char container (Matteo-it)
4. `3a7197a` update storyteller UI (Matteo-it)
5. `fd53766` fix license (stefanotorneo)
6. `9c3089d` Merge branch 'main' into Matteo-it/storyteller (stefanotorneo)
7. `36aa9fb` fix license (stefanotorneo)
8. `f231367` Merge branch 'release-1.0' into Matteo-it/storyteller (stefanotorneo)
9. `491e8a5` fix license (stefanotorneo)
10. `e166a62` fix (stefanotorneo)
11. `252bb81` docs: update bedtime story teller docs (91volt)
12. `064fd39` docs: update bedtime story teller with image placeholders (91volt)
13. `8b5c839` Update char container (Matteo-it)
14. `df0ff5d` update UI (Matteo-it)
15. `a0c5071` Feat: Make 'New story' button visible and clickable after story gener… (Matteo-it)
16. `c3bf5a0` Feat: Implement full reset functionality for 'New story' button (Matteo-it)
17. `5eaaa15` Fix: Prevent background image resizing on content expansion (Matteo-it)
18. `df446ef` Refactor: Remove test settings and functionality (Matteo-it)
19. `2744e15` feat: Add CloudModel import and specify GOOGLE_GEMINI model (Matteo-it)
20. `8614f44` refactor: Remove static system_prompt from CloudLLM (Matteo-it)
21. `d713152` wip (Matteo-it)
22. `e8dc9bb` prompt update (Matteo-it)
23. `c01ff36` [PXCT-1310] Mascot Jump Game Example Addition (#7) (TaddyHC)
24. `1f33010` add images to bedtime storyteller example (91volt)
25. `170afb9` Merge branch 'release-1.0' into docs-bedtime-storyteller (91volt)
26. `8ead395` Update examples/bedtime-story-teller/app.yaml (91volt)
27. `90a2491` Update examples/bedtime-story-teller/app.yaml (91volt)
28. `5264c36` Fix UI nomenclature reference for launching app (91volt)
29. `fd2e676` Update examples/bedtime-story-teller/app.yaml (91volt)
30. `32624d5` Update documentation assets (91volt)
31. `26e621d` Merge branch 'docs-bedtime-storyteller' of https://github.com/91volt/… (91volt)
32. `c6306cc` Update readme content (91volt)
33. `4b24a0a` add models name to readme content (91volt)
# Bedtime Story Teller

The **Bedtime Story Teller** example demonstrates how to build a generative AI application using the Arduino UNO Q. It uses a Large Language Model (LLM) to create personalized bedtime stories based on user-selected parameters like age, theme, and characters, streaming the result in real time to a web interface.

## Description

This App transforms the UNO Q into an AI storytelling assistant. It uses the `cloud_llm` Brick to connect to a cloud-based AI model and the `web_ui` Brick to provide a rich configuration interface.

The workflow allows you to craft a story by selecting specific parameters, such as the child's age, story theme, tone, and specific characters, or to let the App **generate a story randomly** for a quick surprise. The backend constructs a detailed prompt, sends it to the AI model, and streams the generated story back to the browser token by token.
## Bricks Used

The bedtime story teller example uses the following Bricks:

- `cloud_llm`: Brick to interact with cloud-based Large Language Models (LLMs) like Google Gemini, OpenAI GPT, or Anthropic Claude.
- `web_ui`: Brick to create the web interface for parameter input and story display.
## Hardware and Software Requirements

### Hardware

- Arduino UNO Q (x1)
- USB-C® cable (for power and programming) (x1)
- Personal computer with internet access

### Software

- Arduino App Lab

**Note:** This example requires an active internet connection to reach the AI provider's API. You will also need a valid **API Key** for the service used (e.g., a Google AI Studio API Key).
## How to Use the Example

This example requires a valid API Key from an LLM provider (Google Gemini, OpenAI GPT, or Anthropic Claude) and an internet connection.
### Configure & Launch App

1. **Duplicate the Example**
   Since built-in examples are read-only, you must duplicate this App to edit its configuration. Click the arrow next to the App name and select **Duplicate**, or click the **Copy and edit app** button in the top right corner of the App page.

2. **Open Brick Configuration**
   On the App page, locate the **Bricks** section on the left. Click the **Cloud LLM** Brick, then click the **Brick Configuration** button on the right side of the screen.

3. **Add API Key**
   In the configuration panel, enter your API Key into the corresponding field. This securely saves your credentials for the App to use. You can generate an API key from your preferred provider:
   * **Google Gemini:** [Get API Key](https://aistudio.google.com/app/apikey)
   * **OpenAI GPT:** [Get API Key](https://platform.openai.com/api-keys)
   * **Anthropic Claude:** [Get API Key](https://console.anthropic.com/settings/keys)

   ![Brick credentials](assets/docs_assets/brick-credentials.png)

4. **Run the App**
   Launch the App by clicking the **Run** button in the top right corner and wait for it to start.

5. **Access the Web Interface**
   Open the App in your browser at `<UNO-Q-IP-ADDRESS>:7000`.
### Interacting with the App

1. **Choose Your Path**
   You have two options to create a story:
   * **Option A: Manual Configuration** (follow step 2)
   * **Option B: Random Generation** (skip to step 3)

2. **Set Parameters (Manual)**
   Use the interactive interface to configure the story details. The interface unlocks sections sequentially:
   - **Age:** Select the target audience (3-5, 6-8, 9-12, 13-16 years, or Adult).
   - **Theme:** Choose a genre (Fantasy/Adventure, Fairy Tale, Mystery/Horror, Science/Universe, Animals, or Comedy).
   - **Story Type (Optional):** Fine-tune the narrative:
     - *Tone:* e.g., Calm and sweet, Epic and adventurous, Tense and grotesque.
     - *Ending:* e.g., Happy, With a moral, Open and mysterious.
     - *Structure:* Classic, Chapter-based, or Episodic.
     - *Duration:* Short (5 min), Medium (10-15 min), or Long (20+ min).
   - **Characters:** You must add **at least one character** (max 5). Define their Name, Description, and Role (Protagonist, Antagonist, Positive/Negative Helper, or Other).
   - **Generate:** Once ready, click the **Generate story** button.

3. **Generate Randomly**
   If you prefer a surprise, click the **Generate Randomly** button on the right side of the screen. The App automatically selects random options for age, theme, tone, and structure to create a unique story instantly.

4. **Interact**
   The story streams in real time. Once complete, you can:
   - **Copy** the text to your clipboard.
   - Click **New story** to reset the interface and start over.
## How it Works

Once the App is running, it performs the following operations:

- **User Input Collection**: The `web_ui` Brick serves an HTML page where users select story attributes via interactive "chips" and forms.
- **Prompt Engineering**: When the user requests a story, the Python backend receives a JSON object containing all parameters. It dynamically constructs a natural language prompt optimized for the LLM (e.g., "As a parent... I need a story about [Theme]...").
- **AI Inference**: The `cloud_llm` Brick sends this prompt to the configured cloud provider using the API Key set in the Brick Configuration.
- **Stream Processing**: Instead of waiting for the full text, the backend receives the response in chunks (tokens) and forwards them immediately to the frontend via WebSockets, so the user sees progress instantly.
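The prompt-engineering step can be sketched in plain Python. This is a minimal, self-contained illustration of turning a parameter dictionary into a natural-language prompt; the function and field names here are illustrative assumptions, not the App's actual code.

```python
# Hypothetical sketch: flatten the UI's JSON parameters into an LLM prompt.
def build_prompt(params):
    age = params.get("age", "any")
    theme = params.get("theme", "any")
    tone = params.get("tone")
    characters = params.get("characters", [])

    # Base sentence mirroring the style quoted above
    prompt = (
        f"As a parent who loves to read bedtime stories to my {age} "
        f"year old child, I need a story about {theme}."
    )
    if tone:
        prompt += f" The tone should be {tone}."
    for ch in characters:
        prompt += f" Include {ch['name']} ({ch['role']}): {ch['description']}."
    return prompt

example = {
    "age": "6-8",
    "theme": "Animals",
    "tone": "Calm and sweet",
    "characters": [
        {"name": "Milo", "role": "Protagonist", "description": "a brave cat"}
    ],
}
print(build_prompt(example))
```

The real backend builds a richer prompt (endings, structure, duration), but the principle is the same: structured UI data in, one descriptive string out.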
## Understanding the Code

### 🔧 Backend (`main.py`)

The Python script handles the logic of connecting to the AI and managing the data flow. Note that the API Key is not hardcoded; it is retrieved automatically from the Brick configuration.

- **Initialization**: The `CloudLLM` is set up with a system prompt that enforces HTML formatting for the output. The `CloudModel` constants map to specific, efficient model versions:
  * `CloudModel.GOOGLE_GEMINI` → `gemini-2.5-flash`
  * `CloudModel.OPENAI_GPT` → `gpt-4o-mini`
  * `CloudModel.ANTHROPIC_CLAUDE` → `claude-3-7-sonnet-latest`

```python
# The API Key is loaded automatically from the Brick Configuration
llm = CloudLLM(
    model=CloudModel.GOOGLE_GEMINI,
    system_prompt="You are a bedtime story teller. Your response must be the story itself, formatted directly in HTML..."
)
llm.with_memory()
```

- **Prompt Construction**: The `generate_story` function translates the structured data from the UI into a descriptive text prompt for the AI.
```python
import re  # used below to strip HTML tags before sending the prompt to the LLM

def generate_story(_, data):
    # Extract parameters
    age = data.get('age', 'any')
    theme = data.get('theme', 'any')

    # Build natural language prompt
    prompt_for_display = f"As a parent who loves to read bedtime stories to my <strong>{age}</strong> year old child..."

    # ... logic to append characters and settings ...

    # Stream response back to UI
    prompt_for_llm = re.sub('<[^>]*>', '', prompt_for_display)  # Clean tags for LLM
    for resp in llm.chat_stream(prompt_for_llm):
        ui.send_message("response", resp)

    ui.send_message("stream_end", {})
```
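The tag-cleaning regex can be exercised in isolation. This standalone snippet (the prompt string is illustrative) shows the same `re.sub` call stripping the `<strong>` markup used for on-screen display before the text reaches the LLM:

```python
import re

# Display version of the prompt contains HTML for the web UI
prompt_for_display = (
    "As a parent who loves to read bedtime stories to my "
    "<strong>6-8</strong> year old child..."
)

# Remove every HTML tag: '<' followed by anything up to the next '>'
prompt_for_llm = re.sub('<[^>]*>', '', prompt_for_display)
print(prompt_for_llm)
# → As a parent who loves to read bedtime stories to my 6-8 year old child...
```

Note that this regex removes tags but keeps their inner text, which is exactly what the LLM needs.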
### 💻 Frontend (`app.js`)

The JavaScript manages the complex UI interactions, the random generation logic, and the WebSocket communication.

- **Random Generation**: If the user chooses "Generate Randomly", the frontend programmatically selects random chips from the available options and submits the request.
```javascript
document.getElementById('generate-randomly-button').addEventListener('click', () => {
  // Select random elements from the UI lists
  const ageChips = document.querySelectorAll('.parameter-container:nth-child(1) .chip');
  const randomAgeChip = getRandomElement(ageChips);
  // ... repeat for theme, tone, etc ...

  const storyData = {
    age: randomAgeChip ? randomAgeChip.textContent.trim() : 'any',
    // ...
    characters: [], // Random stories use generic characters
  };

  generateStory(storyData);
});
```
- **Socket Listeners**: The frontend listens for chunks of text and appends them to a buffer, creating the streaming effect.

```javascript
socket.on('response', (data) => {
  document.getElementById('story-container').style.display = 'flex';
  storyBuffer += data; // Accumulate text
});

socket.on('stream_end', () => {
  const storyResponse = document.getElementById('story-response');
  storyResponse.innerHTML = storyBuffer; // Final render
  document.getElementById('loading-spinner').style.display = 'none';
});
```
**Binary file added** (+61.7 KB): `examples/bedtime-story-teller/assets/docs_assets/brick-credentials.png`