A collection of tools for Open WebUI that provides structured planning and execution, arXiv paper search, Hugging Face text-to-image generation, prompt enhancement, and multi-model conversations. Perfect for enhancing your LLM interactions with academic research, image generation, and advanced conversation management!
Search arXiv.org for relevant academic papers on any topic. No API key required!
Features:
- Search across paper titles, abstracts, and full text
- Returns detailed paper information, including:
  - Title
  - Authors
  - Publication date
  - URL
  - Abstract
- Automatically sorts by most recent submissions
- Returns up to 5 most relevant papers
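As a rough illustration of what such a tool does under the hood, here is a minimal sketch against arXiv's public Atom API. The helper names are illustrative, not the tool's actual code:

```python
import urllib.parse
import xml.etree.ElementTree as ET

ARXIV_API = "http://export.arxiv.org/api/query"
ATOM = "{http://www.w3.org/2005/Atom}"  # Atom XML namespace

def build_query_url(topic: str, max_results: int = 5) -> str:
    """Build an arXiv API URL sorted by most recent submissions."""
    params = {
        "search_query": f"all:{topic}",   # search titles, abstracts, full text
        "sortBy": "submittedDate",
        "sortOrder": "descending",
        "max_results": max_results,
    }
    return f"{ARXIV_API}?{urllib.parse.urlencode(params)}"

def parse_feed(atom_xml: str) -> list[dict]:
    """Extract title, authors, date, URL, and abstract from an Atom feed."""
    root = ET.fromstring(atom_xml)
    papers = []
    for entry in root.findall(f"{ATOM}entry"):
        papers.append({
            "title": entry.findtext(f"{ATOM}title", "").strip(),
            "authors": [a.findtext(f"{ATOM}name", "")
                        for a in entry.findall(f"{ATOM}author")],
            "published": entry.findtext(f"{ATOM}published", ""),
            "url": entry.findtext(f"{ATOM}id", ""),
            "abstract": entry.findtext(f"{ATOM}summary", "").strip(),
        })
    return papers
```

Fetching `build_query_url(...)` with any HTTP client and passing the response body to `parse_feed` yields the per-paper fields listed above.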
Generate high-quality images from text descriptions using Hugging Face's Stable Diffusion models.
Features:
- Multiple image format options:
  - Default/Square (1024x1024)
  - Landscape (1024x768)
  - Landscape Large (1440x1024)
  - Portrait (768x1024)
  - Portrait Large (1024x1440)
- Customizable model endpoint
- High-resolution output
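For reference, the format options above map to request dimensions roughly as follows. This is a hypothetical helper for illustration; the tool's real parameter names and payload shape may differ:

```python
# Dimensions taken from the format list above.
IMAGE_FORMATS = {
    "default": (1024, 1024),
    "square": (1024, 1024),
    "landscape": (1024, 768),
    "landscape_large": (1440, 1024),
    "portrait": (768, 1024),
    "portrait_large": (1024, 1440),
}

def build_payload(prompt: str, image_format: str = "default") -> dict:
    """Build a text-to-image request payload with the chosen dimensions."""
    width, height = IMAGE_FORMATS[image_format]
    return {
        "inputs": prompt,
        "parameters": {"width": width, "height": height},
    }
```

The resulting dict would then be POSTed to the configured Hugging Face inference endpoint with the API key in the Authorization header.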
This powerful agent allows you to define a goal, and it will autonomously generate and execute a plan to achieve it. The Planner is a generalist agent, capable of handling any text-based task, making it ideal for complex, multi-step requests that would typically require multiple prompts and manual intervention.
It features advanced capabilities like:
- Automatic Plan Generation: Breaks down your goal into a sequence of actionable steps with defined dependencies.
- Adaptive Execution: Executes each step, dynamically adjusting to the results of previous actions.
- LLM-Powered Consolidation: Intelligently merges the outputs of different steps into a coherent final result.
- Reflection and Refinement: Analyzes the output of each step, identifies potential issues, and iteratively refines the output through multiple attempts.
- Robust Error Handling: Includes retries and fallback mechanisms to ensure successful execution even with occasional API errors.
- Detailed Execution Summary: Provides a comprehensive report of the plan execution, including timings and potential issues.
Features:
- General Purpose: Can handle a wide range of text-based tasks, from creative writing and code generation to research summarization and problem-solving.
- Multi-Step Task Management: Excels at managing complex tasks that require multiple steps and dependencies.
- Context Awareness: Maintains context throughout the execution process, ensuring that each step builds upon the previous ones.
- Output Optimization: Employs a reflection mechanism to analyze and improve the output of each step through multiple iterations.
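The dependency-aware, multi-step execution described above can be sketched as a simple topological walk over the plan. This is a simplified illustration, with `run_step` standing in for the actual LLM call:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    id: str
    description: str
    dependencies: list[str] = field(default_factory=list)

def execute_plan(steps: list[Step], run_step) -> dict[str, str]:
    """Run steps in dependency order, passing each step the outputs
    of the steps it depends on (a simplified topological walk)."""
    outputs: dict[str, str] = {}
    pending = list(steps)
    while pending:
        progressed = False
        for step in list(pending):
            if all(dep in outputs for dep in step.dependencies):
                context = {d: outputs[d] for d in step.dependencies}
                outputs[step.id] = run_step(step, context)
                pending.remove(step)
                progressed = True
        if not progressed:
            raise ValueError("Cyclic or unsatisfiable dependencies")
    return outputs
```

Each step only runs once all of its dependencies have produced output, which is how later steps can build on earlier results.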
Research any topic by combining arXiv and web search results with an MCTS-driven refinement loop. Note: unlike the plain arXiv Search Tool, this pipe requires a Tavily API key for its web-search component.
Features:
- Comprehensive Search: Searches across paper titles, abstracts, and full text content from both arXiv and the web using Tavily.
- MCTS-Driven Refinement: Employs a Monte Carlo Tree Search (MCTS) approach to iteratively refine a research summary on a given topic.
- Adaptive Temperature Control: Offers both static and dynamic temperature decay settings. Static decay progressively reduces the LLM's temperature with each level of the search tree. Dynamic decay adjusts the temperature based on both depth and parent node scores, allowing the LLM to explore more diverse options when previous results are less promising. This fine-grained control balances exploration and exploitation for optimal refinement.
- Visual Tree Representation: Provides a visual representation of the search tree, offering intuitive feedback on the exploration process and the relationships between different research directions.
- Transparent Intermediate Steps: Shows intermediate steps of the search, allowing users to track the evolution of the research summary and understand the reasoning behind the refinements.
- Configurable Search Scope: Allows users to configure the breadth and depth of the search (tree width and depth) to control the exploration scope and computational resources used.
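The selection step of an MCTS loop like this typically uses the standard UCT formula, where the Exploration Weight valve plays the role of the constant `c`. This is a generic sketch, not the pipe's exact code:

```python
import math

def uct_score(node_value, node_visits, parent_visits, exploration_weight=1.414):
    """Standard UCT: exploitation (average score) plus an exploration
    bonus that favors rarely visited branches."""
    if node_visits == 0:
        return float("inf")  # always try unvisited children first
    exploit = node_value / node_visits
    explore = exploration_weight * math.sqrt(math.log(parent_visits) / node_visits)
    return exploit + explore
```

At each iteration the child with the highest UCT score is expanded, so a higher exploration weight shifts the search toward less-visited research directions.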
This pipe allows you to simulate conversations between multiple language models, each acting as a distinct character. You can configure up to 5 participants, each with their own model, alias, and character description (system message). This enables complex and dynamic interactions, perfect for storytelling, roleplaying, or exploring different perspectives on a topic.
Features:
- Multiple Participants: Simulate conversations with up to 5 different language models.
- Character Definition: Craft unique personas for each participant using system messages.
- Round-Robin Turns: Control the flow of conversation with configurable rounds per user message.
- Group Chat Manager: Optionally use an LLM to select the next participant in the conversation (toggleable in the valves).
- Streaming Support: See the conversation unfold in real-time with streaming output.
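The round-robin flow can be sketched as follows. `query_model` stands in for the actual Open WebUI model call, and the field names are illustrative:

```python
def run_round(participants, history, query_model, rounds=1):
    """One conversation cycle: each participant replies in turn,
    seeing the history accumulated so far."""
    for _ in range(rounds):
        for p in participants:
            reply = query_model(
                model=p["model"],
                system=p["system_message"],  # the participant's persona
                messages=history,
            )
            history.append({"role": "assistant",
                            "name": p["alias"],
                            "content": reply})
    return history
```

With the Group Chat Manager enabled, the inner `for p in participants` loop would instead ask a manager LLM which participant speaks next.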
Analyze resumes and provide tags, first impressions, adversarial analysis, potential interview questions, and career advice.
Features:
- Resume Analysis: Breaks down a resume into relevant categories, highlighting strengths and weaknesses.
- Tags Generation: Identifies key skills and experience from the resume and assigns relevant tags.
- First Impression: Provides an initial assessment of the resume's effectiveness in showcasing the candidate's qualifications for a target role.
- Adversarial Analysis: Compares the analyzed resume to similar ones, offering actionable feedback on areas for improvement.
- Interview Questions: Suggests insightful questions tailored to the candidate's experience and the target role.
- Career Advisor Response: Offers personalized career advice based on the resume analysis and conversation history.
This filter uses an LLM to automatically improve the quality of your prompts before they are sent to the main language model. It analyzes your prompt and the conversation history to create a more detailed, specific, and effective prompt, leading to better responses.
Features:
- Context-Aware Enhancement: Considers the entire conversation history when refining the prompt.
- Customizable Template: Control the behavior of the prompt enhancer with a customizable template.
- Improved Response Quality: Get more relevant and insightful responses from the main LLM.
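Conceptually, an Open WebUI filter rewrites the request body in its `inlet` hook before the main model sees it. Below is a minimal sketch; the real filter's enhancement logic is LLM-driven, and `enhance` here is only a stand-in:

```python
class Filter:
    """Sketch of a prompt-enhancing filter: the last user message is
    replaced by an improved version before the main model sees it."""

    def inlet(self, body: dict, enhance=lambda prompt, history: prompt) -> dict:
        messages = body.get("messages", [])
        if messages and messages[-1].get("role") == "user":
            original = messages[-1]["content"]
            # The real enhancer also considers the prior conversation.
            messages[-1]["content"] = enhance(original, messages[:-1])
        return body
```

Because only the outgoing request is modified, the main model simply sees a better prompt; the rest of the pipeline is unchanged.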
1. Installing from Haervwe's Open WebUI Hub (Recommended):
   - Visit https://openwebui.com/u/haervwe to access the collection of tools.
   - For Tools (arXiv Search Tool, Hugging Face Image Generator):
     - Locate the desired tool on the hub page.
     - Click the "Get" button next to the tool. This will redirect you to your Open WebUI instance and automatically populate the installation code.
     - (Optional) Review the code and adjust the name and description if needed.
     - Save the tool.
   - For Function Pipes (Planner Agent, arXiv Research MCTS Pipe, Multi Model Conversations) and Filters (Prompt Enhancer):
     - Locate the desired function pipe or filter on the hub page.
     - Click the "Get" button. This will, again, redirect you to your Open WebUI instance with the installation code.
     - (Optional) Review the code and adjust the name and description if needed.
     - Save the function.
2. Manual Installation from the Open WebUI Interface:
   - For Tools (arXiv Search Tool, Hugging Face Image Generator):
     - In your Open WebUI instance, navigate to the "Workspace" tab, then the "Tools" section.
     - Click the "+" button.
     - Copy the entire code of the respective .py file from this repository.
     - Paste the code into the text area in the Open WebUI interface.
     - Provide a name and description, and save the tool.
   - For Function Pipes (Planner Agent, arXiv Research MCTS Pipe, Multi Model Conversations) and Filters (Prompt Enhancer):
     - Navigate to the "Workspace" tab, then the "Functions" section.
     - Click the "+" button.
     - Copy and paste the code from the corresponding .py file.
     - Provide a name and description, and save.
Important Note for the Prompt Enhancer Filter:
- To use the Prompt Enhancer, you must create a new model configuration in Open WebUI.
- Go to "Workspace" -> "Models" -> "+".
- Select a base model.
- In the "Filters" section of the model configuration, enable the "Prompt Enhancer" filter.
- Model: The model ID from your LLM provider connected to Open WebUI.
- Action Model: The model used for task execution; leave as default to use the same model for the whole process.
- Concurrency: Concurrency support is currently experimental. Due to resource limitations, comprehensive testing of concurrent LLM operations has not been possible, and users may experience unexpected behavior when running multiple LLM processes simultaneously. Further testing and optimization are planned.
- Max Retries: The number of times the reflection step and subsequent refinement can run per step.
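The reflection-and-retry behavior controlled by Max Retries can be sketched like this. It is a hypothetical helper: `execute` and `reflect` stand in for the step-execution and reflection LLM calls:

```python
def run_with_reflection(execute, reflect, max_retries=3):
    """Run a step, have a reflection pass critique the output, and
    retry with that feedback until it passes or retries run out."""
    feedback = None
    output = None
    for attempt in range(max_retries + 1):
        output = execute(feedback)      # feedback guides the retry
        ok, feedback = reflect(output)  # (passed?, critique)
        if ok:
            return output
    return output  # best effort after exhausting retries
```

Each retry feeds the previous critique back into the step, which is how the output improves across attempts instead of merely being regenerated.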
No configuration required! The tool works out of the box.
- Model: The model ID from your LLM provider connected to Open WebUI.
- Tavily API Key: Required. Obtain your API key from tavily.com. This is used for web searches.
- Max Web Search Results: The number of web search results to fetch per query.
- Max arXiv Results: The number of results to fetch from the arXiv API per query.
- Tree Breadth: The number of child nodes explored during each iteration of the MCTS algorithm. This controls the width of the search tree.
- Tree Depth: The number of iterations of the MCTS algorithm. This controls the depth of the search tree.
- Exploration Weight: A constant (recommended range 0-2) controlling the balance between exploration and exploitation. Higher values encourage exploration of new branches, while lower values favor exploitation of promising paths.
- Temperature Decay: Exponentially decreases the LLM's temperature parameter with increasing tree depth. This focuses the LLM's output from creative exploration to refinement as the search progresses.
- Dynamic Temperature Adjustment: Provides finer-grained control over temperature decay based on parent node scores. If a parent node has a low score, the temperature is increased for its children, encouraging more diverse outputs and potentially uncovering better paths.
- Maximum Temperature: The initial temperature of the LLM (0-2, default 1.4). Higher temperatures encourage more diverse and creative outputs at the beginning of the search.
- Minimum Temperature: The final temperature of the LLM at maximum tree depth (0-2, default 0.5). Lower temperatures promote focused refinement of promising branches.
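A possible shape for the decay these valves describe is sketched below. The pipe's exact formula may differ; the dynamic-adjustment term in particular is an assumption for illustration:

```python
def decayed_temperature(depth, max_depth, max_temp=1.4, min_temp=0.5,
                        parent_score=None):
    """Static decay: interpolate from max_temp down to min_temp as
    depth grows. Dynamic decay (assumed formula): raise the temperature
    again when the parent node scored poorly (score in [0, 1]),
    encouraging more diverse children."""
    frac = depth / max_depth if max_depth else 1.0
    temp = max_temp - (max_temp - min_temp) * frac
    if parent_score is not None:
        temp += (1.0 - parent_score) * (max_temp - temp)
    return min(max(temp, min_temp), max_temp)
```

At the root the model samples creatively near the maximum temperature; deep in the tree it refines near the minimum, unless a poorly scored parent pushes the temperature back up.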
- Number of Participants: Set the number of participants (1-5).
- Rounds per User Message: Configure how many rounds of replies occur before the user can send another message.
- Participant [1-5] Model: Select the model for each participant.
- Participant [1-5] Alias: Set a display name for each participant.
- Participant [1-5] System Message: Define the persona and instructions for each participant.
- All Participants Appended Message: A global instruction appended to each participant's prompt.
- Temperature, Top_k, Top_p: Standard model parameters.

Note: the valves for unused participants must be left at their defaults or set to valid parameters.
- Model: The model ID from your LLM provider connected to Open WebUI.
- Dataset Path: Local path to the resume dataset CSV file, which must include "Category" and "Resume" columns.
- RapidAPI Key (optional): Required for job search functionality. Obtain an API key from RapidAPI Jobs API.
- Web Search: Enable/disable web search for relevant job postings.
- Prompt templates: Customizable templates for all the steps
Required configuration in Open WebUI:
- API Key (Required): Obtain a Hugging Face API key from your Hugging Face account and set it in the tool's configuration in Open WebUI.
- API URL (Optional): Uses Stability AI's SD 3.5 Turbo model by default; can be customized to use other Hugging Face text-to-image model endpoints such as FLUX.
- User Customizable Template: Allows you to tailor the instructions given to the prompt-enhancing LLM.
- Show Status: Displays status updates during the enhancement process.
- Show Enhanced Prompt: Outputs the enhanced prompt to the chat window for visibility.
- Model ID: Select the specific model to use for prompt enhancement.
Select the pipe with the corresponding model; it will show up like this:
# Example usage in your prompt
"Create a fully-featured Single Page Application (SPA) for Conway's Game of Life, including a responsive UI. No frameworks, no preprocessor, no minifying, no back end; ONLY clean and correct plain HTML, JS, and CSS."
Select the pipe with the corresponding model; it will show up like this:
# Example usage in your prompt
"Do a research summary on 'DPO laser LLM training'"
1. Select the pipe in the Open WebUI interface.
2. Configure the valves (settings) for the desired conversation setup in the admin panel.
3. Start the conversation by sending a user message to the conversation pipe.
Usage:
- Select the Resume Analyzer Pipe in the Open WebUI interface.
- Configure the valves with the desired model, dataset path (optional), and other settings.
- Send the resume text as an attachment (make sure to use the "whole document" setting) along with a message to start the analysis process.
- Review the first impression, adversarial analysis, interview questions, and then ask for career advice.
Example Usage:
# Example usage in your prompt
Analyze this resume:
[Insert resume or resume text here]
The Resume Analyzer Pipe offers a comprehensive analysis of resumes, providing valuable insights and actionable feedback to help candidates improve their job prospects.
(Make sure to turn on the tool in chat before requesting it)
# Example usage in your prompt
Search for recent papers about "tree of thought"
# Example usage in your prompt
Create an image of "a beautiful horse running free"
# Specify format
Create a landscape image of "a futuristic cityscape"
Use the custom Model template in the model selector. The filter will automatically process each user message before it's sent to the main LLM. Configure the valves to customize the enhancement process.
Both tools include comprehensive error handling for:
- Network issues
- API timeouts
- Invalid parameters
- Authentication errors (HF Image Generator)
Feel free to contribute to this project by:
- Forking the repository
- Creating your feature branch
- Committing your changes
- Opening a pull request
MIT License
- Developed by Haervwe
- Credit to the amazing teams behind the models and frameworks these tools build on, and to all the model trainers out there providing these amazing tools.
For issues, questions, or suggestions, please open an issue on the GitHub repository.