# Fully Offline Chat User Interface for Large Language Models
Chat WebUI is an open-source, locally hosted web application that puts the power of conversational AI at your fingertips. With a user-friendly and intuitive interface, you can effortlessly interact with text, document and vision models and access a range of useful built-in tools to streamline your workflow.
- Inspired by ChatGPT: Experience the same intuitive interface, now with expanded capabilities
- Multi-Model Support: Seamlessly switch between text and vision models to suit your needs
- OpenAI Compatible: Works with any OpenAI-compatible API endpoint, ensuring maximum flexibility
- Supports Reasoning Models: Harness the power of reasoning models for your tasks
- Chat Export: Easily export your chats in JSON or Markdown format
- Smart Built-in Tools:
  - Web Search: Instantly find relevant information from across the web
  - YouTube Video Summarizer: Save time with concise summaries of YouTube videos
  - Webpage Summarizer: Extract key points from webpages and condense them into easy-to-read summaries
  - arXiv Paper Summarizer: Unlock insights from academic papers with LLM-powered summarization
- Clone the repository:
  ```bash
  git clone https://github.com/Toy-97/Chat-WebUI.git
  ```
- Navigate to the project directory:
  ```bash
  cd Chat-WebUI
  ```
- Install dependencies:
  ```bash
  pip install -r requirements.txt
  ```
- Run the application:
  ```bash
  python app.py
  ```
- Open a web browser and navigate to:
  ```
  http://localhost:5000
  ```
- Press the Settings button at the top-right corner and set up the Base URL and API Key
- Select a model at the top-left side of the screen
- Start chatting!

If you are using a local LLM, make sure to start its endpoint before running `app.py`, as sketched below.
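To illustrate what an OpenAI-compatible Base URL and API Key look like in practice, here is a minimal sketch using the official `openai` Python client against a hypothetical local endpoint; the URL, key, and model name are placeholders for whatever your local server exposes, not Chat WebUI defaults:

```python
# Minimal connectivity check against a hypothetical local OpenAI-compatible endpoint.
# The base_url, api_key, and model name below are placeholders, not Chat WebUI defaults.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")
response = client.chat.completions.create(
    model="llama3",  # whichever model your local server actually serves
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

If this script gets a reply, the same Base URL and API Key should work in the Settings dialog.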
Chat WebUI comes with a range of built-in tools that can be used to perform various tasks, such as:
- Online web search
- YouTube video summarization
- arXiv paper and abstract summarization
- Webpage text extraction
To use these tools, simply add `@s` to the start of your query. For example:
```
@s latest premier league news
```
The tool will automatically call the right function based on the link you provide. For example, this will extract the website text:
```
@s Summarize this page https://www.promptingguide.ai/techniques/cot
```
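Conceptually, the dispatch keys off the kind of URL in the query. The sketch below is hypothetical and not taken from the Chat WebUI source; it only illustrates the routing described above:

```python
import re

def route_query(query: str) -> str:
    """Hypothetical sketch of how an @s query might be routed by URL type."""
    urls = re.findall(r"https?://\S+", query)
    if not urls:
        return "web_search"          # no URL: fall back to a web search
    url = urls[0]
    if "youtube.com" in url or "youtu.be" in url:
        return "youtube_summary"
    if "arxiv.org" in url:
        return "arxiv_summary"
    return "webpage_summary"         # any other link: extract the page text

print(route_query("@s Summarize this page https://www.promptingguide.ai/techniques/cot"))
# -> webpage_summary
```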
You can include a YouTube URL in your query to obtain a summary of the video. For example:
```
@s what is this video about? https://www.youtube.com/watch?v=b4x8boB2KdI
```
The order of your query and URL does not matter; for example, this also works:
```
@s https://www.youtube.com/watch?v=b4x8boB2KdI what is this video about?
```
Alternatively, you can simply provide the URL on its own to use the built-in prompt:
```
@s https://www.youtube.com/watch?v=b4x8boB2KdI
```
This will employ the built-in prompt to generate a concise summary of the video.
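As a rough, non-authoritative illustration of what a video-summary step involves, a transcript can be fetched and handed to the model with a summarization prompt. The third-party `youtube-transcript-api` package and its classmethod-style `get_transcript` call are assumptions here, not something the project documents:

```python
# Hypothetical sketch of fetching a YouTube transcript for summarization.
# Assumes the third-party youtube-transcript-api package and its classic
# classmethod API; newer releases expose an instance-based API instead.
from urllib.parse import parse_qs, urlparse

from youtube_transcript_api import YouTubeTranscriptApi

def video_id_from_url(url: str) -> str:
    """Pull the video ID out of a standard watch URL."""
    return parse_qs(urlparse(url).query)["v"][0]

video_id = video_id_from_url("https://www.youtube.com/watch?v=b4x8boB2KdI")
transcript = YouTubeTranscriptApi.get_transcript(video_id)
full_text = " ".join(entry["text"] for entry in transcript)
prompt = f"Summarize this video transcript concisely:\n\n{full_text}"
```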
Include an arXiv URL in your query to receive a brief summary of the paper or abstract:
```
@s explain this paper to me https://arxiv.org/pdf/1706.03762
```
Abstract URLs are also supported:
```
@s simply explain this abstract https://arxiv.org/abs/1706.03762
```
You can also paste just the link to use the built-in prompt:
```
@s https://arxiv.org/abs/1706.03762
```
This will employ the built-in prompt to generate a concise summary of the paper or abstract.
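For a sense of what that involves, the public arXiv API can return a paper's abstract given its ID. The sketch below is an illustration under that assumption, not the actual Chat WebUI code:

```python
# Hypothetical sketch of fetching an arXiv abstract for summarization via the
# public arXiv API (export.arxiv.org); not taken from the Chat WebUI source.
import re
import xml.etree.ElementTree as ET

import requests

def fetch_arxiv_abstract(url: str) -> str:
    """Extract the paper ID from an abs/pdf URL and return its abstract text."""
    paper_id = re.search(r"\d{4}\.\d{4,5}", url).group(0)
    response = requests.get("http://export.arxiv.org/api/query", params={"id_list": paper_id})
    root = ET.fromstring(response.text)
    ns = {"atom": "http://www.w3.org/2005/Atom"}
    return root.find("atom:entry/atom:summary", ns).text.strip()

abstract = fetch_arxiv_abstract("https://arxiv.org/abs/1706.03762")
prompt = f"Explain this abstract in simple terms:\n\n{abstract}"
```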
If the link is not a YouTube or arXiv URL, the application will attempt to extract the text from the webpage:
```
@s summarize this into key points https://www.promptingguide.ai/techniques/cot
```
This also has a built-in prompt that you can use by simply pasting the URL:
```
@s https://www.promptingguide.ai/techniques/cot
```
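A generic extraction step of this kind usually amounts to downloading the page and stripping the markup. The following is only a sketch of that idea, assuming the `requests` and `beautifulsoup4` packages rather than whatever Chat WebUI actually uses:

```python
# Hypothetical sketch of webpage text extraction; an illustration only,
# assuming requests and beautifulsoup4, not the actual Chat WebUI code.
import requests
from bs4 import BeautifulSoup

def extract_page_text(url: str) -> str:
    """Download a page and return its visible text with scripts and styles removed."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style"]):
        tag.decompose()
    return " ".join(soup.get_text(separator=" ").split())

page_text = extract_page_text("https://www.promptingguide.ai/techniques/cot")
prompt = f"Summarize this into key points:\n\n{page_text}"
```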
If you do not include a URL in your query, the application will perform a web search using the built-in prompt:
```
@s latest released movies
```
For optimal results, format your query as you would a Google search. The reference URLs used for the answer are printed in the command prompt window.
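As a sketch of the search step only: one common approach is to collect a few result snippets and pass them to the model as context. The `duckduckgo_search` package and its `DDGS().text()` call are assumptions about that third-party library, and there is no claim that Chat WebUI uses it:

```python
# Hypothetical sketch of a web-search step; the duckduckgo_search package and
# DDGS().text() signature are assumptions, and this is not the Chat WebUI code.
from duckduckgo_search import DDGS

def search_snippets(query: str, max_results: int = 5) -> str:
    """Return titles, URLs, and snippets to feed the model as context."""
    with DDGS() as ddgs:
        results = ddgs.text(query, max_results=max_results)
    return "\n\n".join(f"{r['title']}\n{r['href']}\n{r['body']}" for r in results)

context = search_snippets("latest released movies")
prompt = f"Answer the question using these search results:\n\n{context}"
```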
The Deep Query function changed in v1.1. Pressing it now sends a thinking tag to the backend to force the model to think, for example the `</think>` tag.
You can set a specific start tag and end tag for thinking models in Additional Settings. This should keep the feature compatible with a variety of reasoning models.
Make sure to set your start tag and end tag in Additional Settings before using a reasoning model.
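To make the start/end tag idea concrete, here is a hypothetical sketch of how a response could be split into a reasoning block and a final answer using configurable tags; the tag values and function are illustrative, not taken from the Chat WebUI source:

```python
# Hypothetical illustration of handling configurable thinking tags;
# the tag values mirror the common <think>...</think> convention.
START_TAG = "<think>"
END_TAG = "</think>"

def split_reasoning(response: str) -> tuple[str, str]:
    """Separate the model's reasoning block from its final answer."""
    if START_TAG in response and END_TAG in response:
        before, rest = response.split(START_TAG, 1)
        reasoning, answer = rest.split(END_TAG, 1)
        return reasoning.strip(), (before + answer).strip()
    return "", response.strip()

reasoning, answer = split_reasoning("<think>Check the fixture list first.</think>The match is on Sunday.")
print(answer)  # -> The match is on Sunday.
```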
You can adjust model parameters by clicking the Additional Settings button next to the main Settings icon in the top-right corner.
Inside, you'll find four preset buttons:
- Precise: temperature = 0
- Balanced: temperature = 0.5
- Creative: temperature = 1
- Custom: enter your own sampler settings as comma-separated key=value pairs, for example:
  ```
  temperature=0, top_p=0.3, reasoning_effort=high
  ```
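For clarity on how such a string maps to request parameters, here is a hypothetical parsing sketch; it is not the actual Chat WebUI implementation:

```python
# Hypothetical sketch of turning "key=value, key=value" settings into request
# parameters; not taken from the Chat WebUI source.
def parse_sampler_settings(raw: str) -> dict:
    params = {}
    for pair in raw.split(","):
        if "=" not in pair:
            continue
        key, value = (part.strip() for part in pair.split("=", 1))
        try:
            # numeric values become numbers, everything else stays a string
            params[key] = float(value) if "." in value else int(value)
        except ValueError:
            params[key] = value
    return params

print(parse_sampler_settings("temperature=0, top_p=0.3, reasoning_effort=high"))
# -> {'temperature': 0, 'top_p': 0.3, 'reasoning_effort': 'high'}
```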
Toggle this option if you want to chat without saving the conversation to your chat history.
An export button appears when you hover over the bottom-left corner; it allows you to export your chat in JSON or Markdown format.
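As a rough illustration of what those two export formats contain, here is a hypothetical sketch; the message structure is an assumption, not the actual Chat WebUI data model:

```python
# Hypothetical sketch of exporting a chat as JSON or Markdown; the message
# structure is an assumption, not the actual Chat WebUI data model.
import json

chat = [
    {"role": "user", "content": "What is chain-of-thought prompting?"},
    {"role": "assistant", "content": "It asks the model to reason step by step."},
]

with open("chat.json", "w", encoding="utf-8") as f:
    json.dump(chat, f, indent=2)

with open("chat.md", "w", encoding="utf-8") as f:
    for message in chat:
        f.write(f"**{message['role'].capitalize()}**\n\n{message['content']}\n\n")
```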
You can drag and drop images and any text documents into the chat window. RAG is currently not supported, so the full text of each document is used as context.
Contributions are welcome! If you'd like to contribute to the project, please fork the repository and submit a pull request.
Open-source and freely available under the MIT License. Check the LICENSE file for specifics.