Your AI-powered file launcher and search assistant. Think Spotlight or Alfred, but with the intelligence to understand what you're looking for. Press Option (⌥) + Space anywhere to start searching!
- Download the latest release for your architecture:
  - For Apple Silicon (M1/M2): `albert-launcher-{version}-mac-arm64.dmg`
  - For Intel: `albert-launcher-{version}-mac-x64.dmg`
- Open the DMG file and drag alBERT to your Applications folder
- Since the app is not signed with an Apple Developer certificate, you'll need to:
  - Right-click (or Control-click) on alBERT in Applications
  - Select "Open" from the context menu
  - Click "Open" in the security dialog
- This is only required for the first launch
- Press Option (⌥) + Space anywhere to open alBERT!
- 🚀 Launch: Press Option (⌥) + Space anywhere to open alBERT
- 🔍 Search: Just start typing - alBERT understands natural language
- 💡 Smart Results: Results are ranked by relevance to your query
- ⌨️ Navigate: Use arrow keys to move, Enter to open
- ⚡️ Quick Exit: Press Esc to close
Unlike traditional file search tools that rely on filename matching or basic content indexing, alBERT-launcher uses advanced semantic search and AI capabilities to understand the meaning behind your queries. It maintains a dedicated folder (`~/alBERT`) where it indexes and searches through your important documents, providing:
- Semantic Search: Find documents based on meaning, not just keywords
- AI-Powered Answers: Get direct answers to questions about your documents
- Context-Aware Results: Results are ranked based on relevance to your query context
- Instant Access: Global shortcut (Option (⌥) + Space) to access from anywhere
```mermaid
graph TD
    A[User Query] --> B[Query Processor]
    B --> C{Query Type}
    C -->|File Search| D[Local Search Engine]
    C -->|Web Search| E[Brave Search API]
    C -->|AI Question| F[Perplexity AI]
    D --> G[Document Embeddings]
    D --> H[File Index]
    G --> I[Search Results]
    H --> I
    E --> I
    F --> I
    I --> J[Result Ranker]
    J --> K[UI Display]

    subgraph "Local Index"
        H
        G
    end

    subgraph "External Services"
        E
        F
    end
```
```mermaid
graph LR
    A[Electron Main Process] --> B[IPC Bridge]
    B --> C[Renderer Process]

    subgraph "Main Process"
        A --> D[File Watcher]
        A --> E[Search DB]
        A --> F[Embeddings Service]
    end

    subgraph "Renderer Process"
        C --> G[React UI]
        G --> H[Search Bar]
        G --> I[Results View]
        G --> J[Settings Panel]
    end
```
```mermaid
sequenceDiagram
    participant U as User
    participant UI as UI Layer
    participant S as Search Engine
    participant DB as Search DB
    participant AI as AI Services

    U->>UI: Enter Query
    UI->>S: Process Query
    S->>DB: Search Local Index
    S->>AI: Get AI Answer
    par Local Results
        DB-->>S: Document Matches
    and AI Response
        AI-->>S: Generated Answer
    end
    S->>S: Rank & Merge Results
    S->>UI: Display Results
    UI->>U: Show Results
```
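The parallel branch in this diagram maps to a straightforward fan-out and merge. Here is a minimal sketch; `searchLocal`, `askAI`, and `rankResults` are illustrative stand-ins for the engine internals, not the actual implementation:

```typescript
interface SearchResult {
  title: string
  score: number
}

// Stand-ins for the real engine internals summarized by the diagram.
declare function searchLocal(query: string): Promise<SearchResult[]>
declare function askAI(query: string): Promise<SearchResult>
declare function rankResults(results: SearchResult[]): SearchResult[]

// Fan out to the local index and the AI service in parallel, then merge.
async function search(query: string): Promise<SearchResult[]> {
  const [localMatches, aiAnswer] = await Promise.all([searchLocal(query), askAI(query)])
  return rankResults([...localMatches, aiAnswer])
}
```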
- 🚀 Lightning-fast local file search
- 🤖 AI-powered answers using Perplexity
- 🔍 Semantic search capabilities
- 🌐 Web search integration with Brave Search
- ⌨️ Global keyboard shortcuts (Option (⌥) + Space)
- 💾 Smart caching system
- 🎯 Context-aware search results
- 📱 Modern, responsive UI
The `~/alBERT` folder is your personal knowledge base. Any files placed here are:
- Automatically indexed for semantic search
- Processed for quick retrieval
- Analyzed for contextual understanding
- Accessible through natural language queries
alBERT uses advanced embedding techniques to understand the meaning of your documents:
- Documents are split into meaningful chunks
- Each chunk is converted into a high-dimensional vector
- Queries are matched against these vectors for semantic similarity
- Results are ranked based on relevance and context
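To make the matching step concrete, here is a minimal sketch of cosine-similarity ranking over stored chunk vectors; `embedQuery` is a hypothetical stand-in for the embeddings service:

```typescript
// embedQuery is a hypothetical stand-in for the real embeddings service.
declare function embedQuery(query: string): Promise<number[]>

// Cosine similarity between two vectors of equal dimension.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

// Rank stored chunks by semantic similarity to the query.
async function rankChunks(
  query: string,
  chunks: { text: string; vector: number[] }[]
): Promise<{ text: string; score: number }[]> {
  const queryVector = await embedQuery(query)
  return chunks
    .map((c) => ({ text: c.text, score: cosineSimilarity(queryVector, c.vector) }))
    .sort((a, b) => b.score - a.score)
}
```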
- Query Understanding: Natural language processing to understand user intent
- Context Awareness: Maintains conversation context for follow-up queries
- Smart Answers: Generates answers by combining local knowledge with AI capabilities
alBERT-launcher uses OpenRouter to access powerful language models for enhanced search capabilities:
```mermaid
graph TD
    A[User Query] --> B[Query Analyzer]
    B --> C{Query Type}
    C -->|Direct Question| D[OpenRouter API]
    C -->|Document Analysis| E[Local Processing]
    D --> F[Perplexity/LLaMA Model]
    F --> G[AI Response]
    E --> H[Document Vectors]
    H --> I[Semantic Search]
    G --> J[Result Merger]
    I --> J
    J --> K[Final Response]

    subgraph "OpenRouter Service"
        D
        F
    end

    subgraph "Local Processing"
        E
        H
        I
    end
```
- Model Selection: Uses Perplexity's LLaMA-3.1-Sonar-Small-128k model for optimal performance
- Context Integration: Combines AI responses with local document context
- Source Attribution: AI responses include relevant source URLs
- Streaming Responses: Real-time response streaming for better UX
- Fallback Handling: Graceful degradation when API is unavailable
Example OpenRouter configuration:
```typescript
const openRouterConfig = {
  model: "perplexity/llama-3.1-sonar-small-128k-online",
  temperature: 0.7,
  maxTokens: 500,
  systemPrompt:
    "You are a search engine api that provides answers to questions with as many links to sources as possible."
}
```
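For reference, a non-streaming request reusing this configuration could look like the sketch below. It assumes OpenRouter's OpenAI-compatible `/chat/completions` endpoint; streaming and error handling are omitted:

```typescript
// Minimal sketch of a chat completion request against OpenRouter's
// OpenAI-compatible REST API (non-streaming, no error handling).
async function askOpenRouter(query: string): Promise<string> {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      model: openRouterConfig.model,
      temperature: openRouterConfig.temperature,
      max_tokens: openRouterConfig.maxTokens,
      messages: [
        { role: "system", content: openRouterConfig.systemPrompt },
        { role: "user", content: query }
      ]
    })
  })
  const data = await res.json()
  return data.choices[0].message.content
}
```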
alBERT-launcher puts your privacy first by supporting local AI processing through Ollama integration. Switch between cloud and local AI with a single click:
```mermaid
graph TD
    A[Your Query] --> B{Privacy Mode}
    B -->|Private| C[Local AI]
    B -->|Public| D[Cloud AI]
    C --> E[Private Results]
    D --> F[Cloud Results]
```
- 🔒 Privacy Mode: Switch between local and cloud AI instantly
- 💻 Local Processing: Keep your data on your machine
- 🌐 Flexible Choice: Use cloud AI when you need more power
- ⚡ Fast Response: No internet latency in local mode
- 💰 Cost-Free: No API costs when using local models
- 🔌 Offline Support: Work without internet connection
- Install Ollama from ollama.ai
- Enable "Private Mode" in alBERT settings
- Start searching with complete privacy!
Choose from various powerful local models:
- Llama 2
- CodeLlama
- Mistral
- And more from Ollama's model library
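Under the hood, private-mode queries can go straight to Ollama's local REST API, which listens on port 11434 by default. A minimal sketch, assuming a pulled `mistral` model:

```typescript
// Query a locally served model through Ollama's REST API.
// Assumes Ollama is running and the model has been pulled,
// e.g. `ollama pull mistral`.
async function askLocalModel(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "mistral", prompt, stream: false })
  })
  const data = await res.json()
  return data.response
}
```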
- Complete Privacy: Your queries never leave your computer
- No API Costs: Use AI features without subscription fees
- Always Available: Work offline without interruption
- Full Control: Choose and customize your AI models
alBERT-launcher implements a sophisticated file system monitoring and indexing system:
```mermaid
graph TD
    A[File System Events] --> B[Event Watcher]
    B --> C{Event Type}
    C -->|Create| D[Index New File]
    C -->|Modify| E[Update Index]
    C -->|Delete| F[Remove from Index]
    D --> G[File Processor]
    E --> G
    G --> H[Content Extractor]
    H --> I[Text Chunker]
    I --> J[Vector Database]

    subgraph "File Processing Pipeline"
        G
        H
        I
    end

    subgraph "Search Index"
        J
    end
```
- **Automatic Monitoring**
  - Real-time file change detection
  - Efficient delta updates
  - Handles file moves and renames
  - Supports symbolic links
- **Content Processing**

  ```typescript
  // Example content processing pipeline
  async function processFile(filePath: string) {
    const content = await readContent(filePath)
    const chunks = splitIntoChunks(content)
    const vectors = await vectorizeChunks(chunks)
    await updateSearchIndex(filePath, vectors)
  }
  ```
- **Supported File Types**
  - Text files (.txt, .md, .json)
  - Documents (.pdf, .doc, .docx)
  - Code files (.js, .py, .ts, etc.)
  - Configuration files (.yaml, .toml)
  - And more...
- **Smart Indexing**
  - Incremental updates
  - Content deduplication
  - Metadata extraction
  - File type detection
- **Search Capabilities**
  - Full-text search
  - Fuzzy matching
  - Regular expressions
  - Metadata filters
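Content deduplication and change detection like the above are typically hash-based. Here is a minimal sketch using Node's built-in crypto module; the `previousHashes` map is an illustrative stand-in for the stored index metadata:

```typescript
import { createHash } from "node:crypto"
import { readFile } from "node:fs/promises"

// Hash file contents so unchanged files can be skipped on re-index.
async function fileHash(filePath: string): Promise<string> {
  const content = await readFile(filePath)
  return createHash("sha256").update(content).digest("hex")
}

// previousHashes stands in for stored index metadata (illustrative).
async function needsReindex(
  filePath: string,
  previousHashes: Map<string, string>
): Promise<boolean> {
  return previousHashes.get(filePath) !== (await fileHash(filePath))
}
```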
The `~/alBERT` directory structure:
```
~/alBERT/
├── documents/        # General documents
├── notes/            # Quick notes and thoughts
├── code/             # Code snippets and examples
├── configuration/    # Config files and settings
└── .alBERT/          # Internal index and metadata
    ├── index/        # Search indices
    ├── vectors/      # Document vectors
    ├── cache/        # Query cache
    └── metadata/     # File metadata
```
- **Indexing**
  - Batch processing for multiple files
  - Parallel processing when possible
  - Priority queue for important files
  - Delayed processing for large files
- **Search**

  ```mermaid
  graph LR
      A[Query] --> B[Vector]
      B --> C{Search Type}
      C -->|ANN| D[Approximate Search]
      C -->|KNN| E[Exact Search]
      D --> F[Results]
      E --> F
  ```
- **File Monitoring**
  - Debounced file system events
  - Coalescence of multiple events
  - Selective monitoring based on file size
  - Resource-aware processing
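A debounced, size-aware watcher along these lines could be built with the popular chokidar library; this is a sketch under that assumption, with the indexing calls as stand-ins:

```typescript
import chokidar from "chokidar"
import { statSync } from "node:fs"
import { homedir } from "node:os"
import { join } from "node:path"

// Stand-ins for the real indexing pipeline.
declare function indexFile(path: string): void
declare function updateIndex(path: string): void
declare function removeFromIndex(path: string): void

const MAX_FILE_SIZE = 10 * 1024 * 1024 // illustrative 10 MB cap

const watcher = chokidar.watch(join(homedir(), "alBERT"), {
  ignoreInitial: true,
  // Coalesce rapid writes: wait until the file stops changing.
  awaitWriteFinish: { stabilityThreshold: 500, pollInterval: 100 }
})

watcher
  .on("add", (path) => {
    if (statSync(path).size <= MAX_FILE_SIZE) indexFile(path)
  })
  .on("change", (path) => updateIndex(path))
  .on("unlink", (path) => removeFromIndex(path))
```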
alBERT-launcher uses Weaviate Embedded as its vector database engine, providing efficient storage and retrieval of document embeddings:
```mermaid
graph TD
    A[Document] --> B[Content Extractor]
    B --> C[Text Chunks]
    C --> D[Embedding Model]
    D --> E[Vector Embeddings]
    E --> F[Weaviate DB]
    G[Search Query] --> H[Query Vectorizer]
    H --> I[Query Vector]
    I --> J[Vector Search]
    F --> J
    J --> K[Ranked Results]

    subgraph "Embedding Pipeline"
        B
        C
        D
        E
    end

    subgraph "Vector Store"
        F
    end

    subgraph "Search Pipeline"
        H
        I
        J
    end
```
- **Document Processing**

  ```typescript
  interface WeaviateDocument {
    content: string
    path: string
    lastModified: number
    extension: string
  }
  ```
- **Schema Definition**

  ```typescript
  const schema = {
    class: 'File',
    properties: [
      { name: 'path', dataType: ['string'] },
      { name: 'content', dataType: ['text'] },
      { name: 'filename', dataType: ['string'] },
      { name: 'extension', dataType: ['string'] },
      { name: 'lastModified', dataType: ['number'] },
      { name: 'hash', dataType: ['string'] }
    ],
    vectorizer: 'none' // Custom vectorization
  }
  ```
- **Worker-based Processing**
  - Dedicated worker threads for vectorization
  - Parallel processing of document batches
  - Automatic resource management
  - Error handling and recovery
- **Batch Processing**

  ```typescript
  // Example batch processing; embedBatch is the worker-side call (assumed).
  declare function embedBatch(texts: string[]): Promise<number[][]>

  export const embed = async (
    text: string | string[],
    batch_size: number = 15
  ): Promise<number[] | number[][]> => {
    // Process in batches for optimal performance
    const texts = Array.isArray(text) ? text : [text]
    const vectors: number[][] = []
    for (let i = 0; i < texts.length; i += batch_size) {
      vectors.push(...(await embedBatch(texts.slice(i, i + batch_size))))
    }
    return Array.isArray(text) ? vectors : vectors[0]
  }
  ```
- **Reranking System**
  - Cross-encoder for accurate result ranking
  - Contextual similarity scoring
  - Optional document return with scores
- **Efficient Storage**
  - Incremental updates
  - Document hashing for change detection
  - Optimized vector storage
  - Automatic garbage collection
- **Fast Retrieval**
  - Vector search follows the same ANN/KNN flow shown in the search diagram above
- **Optimization Techniques**
  - Approximate Nearest Neighbor (ANN) search
  - Vector quantization
  - Dimension reduction
  - Caching strategies
- **Hybrid Search**
  - Combined keyword and semantic search
  - Weighted scoring system
  - Metadata filtering
  - Context-aware ranking
- **Vector Operations**

  ```typescript
  interface RankResult {
    corpus_id: number
    score: number
    text?: string
  }
  ```
- **Quality Assurance**
  - Automated consistency checks
  - Vector space analysis
  - Performance monitoring
  - Error detection
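Putting the reranking pieces together, a cross-encoder pass over candidate chunks might look like the following sketch; `scorePair` is a hypothetical stand-in for the model call, and `RankResult` is the interface shown above:

```typescript
// scorePair is a hypothetical cross-encoder scoring call (would run in a worker).
declare function scorePair(query: string, text: string): Promise<number>

interface RankResult {
  corpus_id: number
  score: number
  text?: string
}

// Rerank candidates into RankResult[]; optionally return the documents too.
async function rerank(
  query: string,
  candidates: string[],
  returnDocuments = true
): Promise<RankResult[]> {
  const scored = await Promise.all(
    candidates.map(async (text, corpus_id) => ({
      corpus_id,
      score: await scorePair(query, text),
      ...(returnDocuments ? { text } : {})
    }))
  )
  return scored.sort((a, b) => b.score - a.score)
}
```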
```mermaid
sequenceDiagram
    participant App as Application
    participant VDB as Vector DB
    participant Worker as Worker Thread
    participant Storage as File Storage

    App->>VDB: Index Request
    VDB->>Worker: Vectorize Content
    Worker->>Worker: Process Batch
    Worker-->>VDB: Return Vectors
    VDB->>Storage: Store Vectors
    Storage-->>VDB: Confirm Storage
    VDB-->>App: Index Complete

    App->>VDB: Search Request
    VDB->>Worker: Vectorize Query
    Worker-->>VDB: Query Vector
    VDB->>Storage: Vector Search
    Storage-->>VDB: Search Results
    VDB-->>App: Ranked Results
```
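As a concrete example of the search leg, a nearest-neighbor query against the `File` class could be issued with the weaviate-ts-client GraphQL builder. The connection details below are assumptions; adjust them to wherever the embedded instance listens:

```typescript
import weaviate from "weaviate-ts-client"

// Connection details are assumptions for a locally running instance.
const client = weaviate.client({ scheme: "http", host: "localhost:8080" })

// Retrieve the closest documents to an already-computed query vector.
async function vectorSearch(queryVector: number[]) {
  const result = await client.graphql
    .get()
    .withClassName("File")
    .withFields("path content _additional { distance }")
    .withNearVector({ vector: queryVector })
    .withLimit(10)
    .do()
  return result.data.Get.File
}
```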
- Node.js (v16 or higher)
- pnpm package manager
- Brave Search API key (optional)
- OpenRouter API key (optional)
```bash
# Start the development server
pnpm dev

# For macOS
pnpm build:mac

# For Windows
pnpm build:win

# For Linux
pnpm build:linux
```
Create a `.env` file in the root directory with the following variables:
```env
BRAVE_API_KEY=your_brave_api_key
OPENROUTER_API_KEY=your_openrouter_api_key
```
```
alBERT-launcher/
├── src/
│   ├── main/                  # Electron main process
│   │   ├── api.ts             # tRPC API endpoints
│   │   ├── db.ts              # Search database management
│   │   ├── embeddings.ts      # Text embedding functionality
│   │   └── utils/             # Utility functions
│   ├── renderer/              # React frontend
│   │   ├── components/        # UI components
│   │   ├── lib/               # Utility functions
│   │   └── App.tsx            # Main application component
│   └── preload/               # Electron preload scripts
├── public/                    # Static assets
└── electron-builder.json5     # Build configuration
```
The search API supports various query types:
- Basic text search
- Semantic search
- Natural language questions
- File metadata queries
Example queries:
"find documents about react hooks"
"what are the key points from my meeting notes?"
"show me python files modified last week"
alBERT automatically monitors the `~/alBERT` folder for:
- New files
- File modifications
- File deletions
- File moves
Changes are automatically indexed and available for search immediately.
We welcome contributions! Please see our Contributing Guide for details.
This project is licensed under the MIT License - see the LICENSE file for details.
If you encounter any issues or have questions, please file an issue on our GitHub Issues page.