
πŸ‘οΈ Observer AI

Local Micro-Agents That Observe, Log and React

Build powerful micro-agents that observe your digital world, remember what matters, and react intelligentlyβ€”all while keeping your data 100% private and secure.



πŸ‘οΈ Observer Agents See

Screen
Screen
OCR & Vision
Camera
Camera
Visual Input
Audio
Audio
Mic + Computer
Memory
Memory
Text + Image

🎯 What Observer AI Does Best

πŸ“Š Intelligent Logging

🧠 Text & Visual Memory Store and retrieve images or text intelligently.

πŸŽ₯ Smart Screen Recording Start recording when something happens, or analyze clips with custom labels

πŸ’Ύ Persistent Context Agents remember what matters across sessions

🚨 Powerful Notifications

πŸ“§ Email β€’ πŸ’¬ Discord β€’ πŸ“± Telegram πŸ“ž SMS β€’ πŸ’š WhatsApp

πŸ–₯️ System Alerts Native OS notifications and pop-ups

πŸ“Ί Observer Overlay Custom on-screen messages

Build local agents that don't just watchβ€”they understand, remember, and act intelligently on your behalf.

πŸ“Ή Why Observer AI?

Observer AI Demo

πŸ—οΈ Building Your Own Agent

Creating your own Observer AI agent is simple; it consists of three things:

  • SENSORS - the inputs your model receives
  • MODELS - models run by Ollama or by Ob-Server
  • TOOLS - functions your model can call

Quick Start

  1. Navigate to the Agent Dashboard and click "Create New Agent"
  2. Fill in the "Configuration" tab with basic details (name, description, model, loop interval)
  3. Give your model a system prompt and Sensors! The current Sensors are:
    • Screen OCR ($SCREEN_OCR) Captures screen content as text via OCR
    • Screenshot ($SCREEN_64) Captures the screen as an image for multimodal models
    • Agent Memory ($MEMORY@agent_id) Accesses an agent's stored text memory
    • Agent Image Memory ($IMEMORY@agent_id) Accesses an agent's stored images
    • Clipboard ($CLIPBOARD) Pastes the clipboard contents
    • Microphone* ($MICROPHONE) Captures microphone audio and adds a transcription
    • Screen Audio* ($SCREEN_AUDIO) Captures an audio transcription of a screen-shared tab
    • All Audio* ($ALL_AUDIO) Mixes microphone and screen audio and provides a complete transcription of both (useful for meetings)

* Uses a Whisper model via transformers.js

Agent Tools:

  • getMemory(agentId)* – Retrieves stored memory
  • setMemory(agentId, content)* – Replaces stored memory
  • appendMemory(agentId, content)* – Adds to existing memory
  • getImageMemory(agentId)* – Retrieves images stored in memory
  • setImageMemory(agentId, images) – Replaces stored images
  • appendImageMemory(agentId, images) – Adds images to memory
  • startAgent(agentId)* – Starts an agent
  • stopAgent(agentId)* – Stops an agent
  • time() – Gets the current time
  • sleep(ms) – Waits for the given number of milliseconds
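As a sketch of how the memory tools compose (assuming they are provided as globals inside the Observer code sandbox; `formatEntry` is a hypothetical helper, not part of the API):

```javascript
// Hypothetical helper: prefix a memory entry with a timestamp.
function formatEntry(timestamp, text) {
  return `[${timestamp}] ${text.trim()}`;
}

// Inside the Observer sandbox, appendMemory() and time() are globals;
// the typeof guard keeps this snippet inert when run anywhere else.
if (typeof appendMemory === "function" && typeof time === "function") {
  appendMemory(formatEntry(time(), "meeting started")); // agentId omitted: defaults to this agent
}
```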

Notification Tools:

  • sendEmail(email, message, images?) – Sends an email
  • sendPushover(user_token, message, images?, title?) – Sends a Pushover notification
  • sendDiscord(discord_webhook, message, images?) – Sends a Discord message to a server
  • sendTelegram(chat_id, message, images?) – Sends a Telegram message via the Observer bot. Get your chat_id by messaging the bot @observer_notification_bot
  • sendWhatsapp(phone_number, message) – Sends a WhatsApp message via the Observer bot. Send a message to +1 (555) 783-4727 first to use it
  • notify(title, options) – Sends a browser notification ⚠️ IMPORTANT: Some browsers block notifications
  • sendSms(phone_number, message, images?) – Sends an SMS to a phone number, e.g. sendSms("+181429367", "hello") ⚠️ IMPORTANT: Due to A2P policy, some SMS messages are blocked; not recommended for US/Canada
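For example, a watcher agent might only notify when the model emits an explicit marker that your system prompt asks for (`shouldNotify` is a hypothetical helper; `sendTelegram` is assumed to be a sandbox global):

```javascript
// Hypothetical helper: trigger only on an explicit ALERT marker
// that the system prompt instructs the model to emit.
function shouldNotify(response) {
  return /\bALERT\b/.test(response);
}

// sendTelegram is an Observer sandbox global; the typeof guard
// keeps this snippet harmless outside the sandbox.
const reply = "ALERT: the download finished.";
if (shouldNotify(reply) && typeof sendTelegram === "function") {
  sendTelegram("12345678", reply); // example chat_id
}
```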

Video Recording Tools:

  • startClip() – Starts a recording of any video media and saves it to the Recordings tab
  • stopClip() – Stops an active recording
  • markClip(label) – Adds a label to any active recording, displayed in the Recordings tab
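A minimal sketch of a clip-recording loop, assuming the clip functions are sandbox globals and that the model's `response` describes what is on screen (`isActive` is a hypothetical helper, not part of the API):

```javascript
// Hypothetical helper: treat "idle"/"nothing" in the model's
// description as a quiet screen.
function isActive(response) {
  return !/\b(idle|nothing)\b/i.test(response);
}

// startClip/markClip/stopClip are Observer sandbox globals; the
// typeof guard keeps this snippet inert outside the sandbox.
if (typeof startClip === "function") {
  if (isActive(response)) {
    startClip();                    // begin (or continue) recording
    markClip("activity detected");  // label appears in the Recordings tab
  } else {
    stopClip();                     // stop when things go quiet
  }
}
```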

App Tools:

  • ask(question, title="Confirmation") - Pops up a system confirmation dialog
  • message(message, title="Agent Message") - Pops up a system message
  • system_notify(body, title="Observer AI") - Sends a system notification
  • overlay(body) - Pushes a message to the overlay
  • click() - Triggers a mouse click at the current cursor position ⚠️IMPORTANT: Position mouse before agent runs
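A sketch of a confirm-before-acting pattern with the app tools (`confirmMessage` is a hypothetical helper; whether ask() returns a plain boolean is an assumption here):

```javascript
// Hypothetical helper: phrase a confirmation prompt for ask().
function confirmMessage(action) {
  return `Allow the agent to ${action}?`;
}

// ask() and system_notify() are Observer sandbox globals; the
// typeof guard keeps this snippet inert outside the sandbox.
if (typeof ask === "function") {
  if (ask(confirmMessage("send today's summary email"), "Confirmation")) {
    system_notify("Summary sent.", "Observer AI");
  }
}
```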

Code Tab

The "Code" tab receives the following variables as context before running:

  • response – The model's response
  • agentId – The id of the agent running the code
  • screen – The screen as base64, if captured
  • camera – The camera as base64, if captured
  • imemory – The agent's current image array
  • images – All images in context

JavaScript agents run in the browser sandbox, making them ideal for passive monitoring and notifications:

// Remove <think> tags from a deepseek-style response
const cleanedResponse = response.replace(/<think>[\s\S]*?<\/think>/g, '').trim();

// Get the current time (avoid shadowing the time() tool)
const now = time();

// Update memory with a timestamp
appendMemory(`[${now}] ${cleanedResponse}`);

// Send to Telegram for notekeeping
sendTelegram("12345678", cleanedResponse); // Example chat_id

Note: any function marked with * takes an agentId argument.
If you omit agentId, it defaults to the agent that’s running the code.

πŸš€ Getting Started with Local Inference

There are a few ways to get Observer up and running with local inference. I recommend the Observer App.

Option 1: Just Install the Desktop App with any OpenAI compatible endpoint (Ollama, llama.cpp, vLLM)

Download App

Download Ollama for the best compatibility.

Observer can connect directly to any server that provides a v1/chat/completions endpoint. If you are not using Ollama, set the Custom Model Server URL in the app to your vLLM, llama.cpp, or other OpenAI-compatible endpoint.

Option 2: Full Docker Setup (Deprecated)


✨ Major Update: Simpler Setup & More Flexibility! The observer-ollama service no longer requires SSL by default. This means no more browser security warnings for a standard local setup! It now also supports any backend that uses a standard OpenAI-compatible (v1/chat/completions) endpoint, like Llama.cpp.

This method uses Docker Compose to run everything you need in containers: the Observer WebApp, the observer-ollama translator, and a local Ollama instance. This is the easiest way to get a 100% private, local-first setup.

Prerequisites: Docker and Docker Compose installed.

Instructions:

  1. Clone the repository and start the services:

    git clone https://github.com/Roy3838/Observer.git
    cd Observer/docker
    docker-compose up --build
  2. Access the Local WebApp:

    • Open your browser to http://localhost:8080. This is your self-hosted version of the Observer app.
  3. Connect to your Ollama service:

    • In the app's header/settings, set the Model Server Address to http://localhost:3838. This is the observer-ollama translator that runs in a container and communicates with Ollama for you.
  4. Pull Ollama Models:

    • Navigate to the "Models" tab and click "Add Model". This opens a terminal to your Ollama instance.
    • Pull any model you need, for example:
      ollama run gemma3:4b # <- highly recommended model!

For NVIDIA GPUs: it's recommended to edit docker/docker-compose.yml and explicitly add the NVIDIA GPU runtime to the ollama container. Add this to the ollama section of docker/docker-compose.yml:

    volumes:
      - ollama_data:/root/.ollama
    # ADD THIS SECTION
    runtime: nvidia
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    # UP TO HERE
    ports:
      - "11434:11434"

To Stop the Docker Setup:

cd docker && docker-compose down

βš™οΈ Configuration (Docker)

To customize your setup (e.g., enabling SSL so you can access it from app.observer-ai.com, or disabling the docker exec feature), simply edit the environment: section in your docker/docker-compose.yml file. All options are explained with comments directly in the file.

Setting Up Python (Jupyter Server)

Python agents run on a Jupyter server with system-level access, enabling them to interact directly with your computer:

#python <-- don't remove this!
print("Hello World!", response, agentId)

# Example: Analyze screen content and take action
if "SHUTOFF" in response:
    # System level commands can be executed here
    import os
    # os.system("command")  # Be careful with system commands!

Jupyter Server Configuration

To use Python agents:

  1. Run a Jupyter server on your machine with c.ServerApp.allow_origin = '*'
  2. Configure the connection in the Observer AI interface:
    • Host: The server address (e.g., 127.0.0.1)
    • Port: The server port (e.g., 8888)
    • Token: Your Jupyter server authentication token
  3. Test the connection using the "Test Connection" button
  4. Switch to the Python tab in the code editor to write Python-based agents
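As a reference, a minimal server config for step 1 might look like the sketch below (generate a config file with `jupyter server --generate-config`; the host, port, and token values are placeholders you should replace with your own):

```python
# ~/.jupyter/jupyter_server_config.py — minimal sketch for Observer
c.ServerApp.allow_origin = '*'         # let the Observer web app connect
c.ServerApp.ip = '127.0.0.1'           # listen locally only
c.ServerApp.port = 8888
c.ServerApp.token = 'your-token-here'  # paste this into Observer's Token field
```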

Deploy & Share

Save your agent, test it from the dashboard, and export the configuration to share with others!

🀝 Contributing

We welcome contributions from the community! Here's how you can help:

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'feat: add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

πŸ“„ License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ”— Links

πŸ“§ Contact


Built with ❀️ by Roy Medina for the Observer AI Community. Special thanks to the Ollama team for being an awesome backbone to this project!