# vnc-lm

## Introduction

vnc-lm is a Discord bot that lets you talk with and configure language models in your server. It uses ollama to manage and run different models.

*Web scraping example*

*Model pulling example*

## Features

### Model Management

Change models with the `/model` command and adjust parameters such as `num_ctx`, `system_prompt`, and `temperature`. A notification is sent when a model loads into RAM. The bot responds in whichever channel `/model` was last used. Models can be removed with the `remove` parameter, and new models can be downloaded directly through Discord by messaging a tag URL:

```
https://ollama.com/library/phi3.5:3.8b-mini-instruct-q2_K
```

Model downloading and removal are turned off by default and can be enabled by configuring the `.env` file.
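
As a sketch of what happens when a tag URL is messaged, the model tag can be read straight off the URL path and then passed to Ollama's `/api/pull` endpoint. The helper name below is illustrative, not the bot's actual code (which presumably lives in `src/api-connections/model-pull.ts`):

```typescript
// Illustrative helper: extract the Ollama model tag from a library URL,
// e.g. https://ollama.com/library/phi3.5:3.8b-mini-instruct-q2_K
function modelNameFromTagUrl(url: string): string | null {
  const match = new URL(url).pathname.match(/^\/library\/(.+)$/);
  return match ? match[1] : null;
}

// The extracted tag ("phi3.5:3.8b-mini-instruct-q2_K") is what gets sent
// to the Ollama server's pull endpoint to start the download.
```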

### QoL Improvements

Message generation is streamed, and messages longer than 1500 characters are split into pages. Message attachments such as text-based files, web links, and screenshots can be added to the context window.
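
The paging behavior can be sketched as follows; `paginate` and its prefer-newline split are illustrative, not the bot's actual implementation (which presumably lives in `src/managers/page-manager.ts`):

```typescript
// Split a long reply into chunks of at most `limit` characters,
// preferring to break on a newline inside the window.
const CHARACTER_LIMIT = 1500; // matches the .env default

function paginate(text: string, limit: number = CHARACTER_LIMIT): string[] {
  const pages: string[] = [];
  let rest = text;
  while (rest.length > limit) {
    // Break at the last newline inside the window, or hard-cut if none.
    let cut = rest.lastIndexOf("\n", limit);
    if (cut <= 0) cut = limit;
    pages.push(rest.slice(0, cut));
    rest = rest.slice(cut).replace(/^\n/, "");
  }
  pages.push(rest);
  return pages;
}
```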

Switch between conversations by clicking `Rejoin Conversation` in the Discord context menu. Conversations can be continued from any point and with different models. All messages are cached and organized into conversations. `entrypoint.sh` helps the cache file persist across Docker containers.

Messaging `stop` ends message generation early. Messaging `reset` returns a model to its default configuration.

## Requirements

  1. Ollama: Get up and running with Llama 3.1, Mistral, Gemma 2, and other large language models.
  2. Docker: Docker is a platform designed to help developers build, share, and run container applications. We handle the tedious setup, so you can focus on the code.

## Environment Configuration

  1. Clone the repository with `git clone https://github.com/jake83741/vnc-lm.git`
  2. Move into the directory with `cd vnc-lm`
  3. Rename `.env.example` to `.env` in the project root directory, then configure the `.env` file:

  • `TOKEN=`: Your Discord bot token. Create it in the Discord Developer Portal and grant the bot the permissions it needs.
  • `OLLAMAURL=`: The URL of your Ollama server. See the API documentation. Docker requires `http://host.docker.internal:11434`.
  • `NUM_CTX=`: Context window size. Defaults to `2048`.
  • `TEMPERATURE=`: Randomness of responses. Defaults to `0.4`.
  • `KEEP_ALIVE=`: How long a model stays in memory. Defaults to `45m`.
  • `CHARACTER_LIMIT=`: Character limit for page embeds. Defaults to `1500`.
  • `API_RESPONSE_UPDATE_FREQUENCY=`: Number of API response chunks to collect before updating the message. Too low a value will cause the Discord API to throttle the bot. Defaults to `10`.
  • `ADMIN=`: Discord user ID. Enables downloading and removing models for that user.
  • `REQUIRE_MENTION=`: Whether the bot must be mentioned to respond. Defaults to `false`.
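
Assembled, a filled-in `.env` might look like the following (the token and admin ID are placeholders; the remaining values are the documented defaults):

```shell
TOKEN=your-discord-bot-token
OLLAMAURL=http://host.docker.internal:11434
NUM_CTX=2048
TEMPERATURE=0.4
KEEP_ALIVE=45m
CHARACTER_LIMIT=1500
API_RESPONSE_UPDATE_FREQUENCY=10
ADMIN=123456789012345678
REQUIRE_MENTION=false
```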

## Docker Installation (Preferred)

```shell
docker compose up --build
```

## Manual Installation

```shell
npm install
npm run build
npm start
```

## Usage

  1. `/model`: Load, configure, or remove a language model. Optional parameters: `num_ctx`, `system_prompt`, `temperature`, and `remove`.

     *`/model` example*

  2. `Rejoin Conversation`: Rejoin an old conversation at a specific point. Messages up to the selected point in the conversation are included.

     *Rejoin Conversation example*

  3. `/help`: Instructions for how to use the bot.

     *`/help` example*

## Tree Diagram

```
.
├── LICENSE
├── README.md
├── docker-compose.yaml
├── dockerfile
├── entrypoint.sh
├── .env.example
├── imgs
├── package.json
├── src
│   ├── api-connections
│   │   ├── api-requests.ts
│   │   ├── library-refresh.ts
│   │   ├── model-loader.ts
│   │   └── model-pull.ts
│   ├── bot.ts
│   ├── commands
│   │   ├── command-registry.ts
│   │   ├── help-command.ts
│   │   ├── model-command.ts
│   │   ├── optional-params
│   │   │   └── remove.ts
│   │   └── rejoin-conversation.ts
│   ├── functions
│   │   ├── ocr-function.ts
│   │   └── scraper-function.ts
│   ├── managers
│   │   ├── cache-manager.ts
│   │   ├── message-manager.ts
│   │   └── page-manager.ts
│   ├── message-generation
│   │   ├── chunk-generation.ts
│   │   ├── message-create.ts
│   │   └── message-preprocessing.ts
│   └── utils.ts
└── tsconfig.json
```

## Notes

  1. If an issue arises with the Docker set-up, set the `OLLAMA_HOST` environment variable to `0.0.0.0`. See the server documentation.
  2. Attachments with large amounts of text will require a higher `num_ctx` value to work properly.
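
For example, when Ollama runs directly on the host, the variable can be set when starting the server (this is the standard way to expose Ollama on all interfaces):

```shell
OLLAMA_HOST=0.0.0.0 ollama serve
```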

## Dependencies


  1. Axios: Promise-based HTTP client for the browser and Node.js.
  2. Discord.js: A powerful JavaScript library for interacting with the Discord API.
  3. dotenv: Loads environment variables from `.env` for Node.js projects.
  4. tesseract.js: A JavaScript library that extracts words in almost any language from images.
  5. jsdom: A JavaScript implementation of various web standards, for use with Node.js.
  6. readability: A standalone version of the Readability library.

## License

This project is licensed under the MIT License.