The current design of this bot provides two slash commands on Discord (a sketch of how they might be registered follows the list):

- `/ask <model(optional): selectable model available> <prompt(required): the prompt to send>`: Submit the prompt directly to the LLM backend.
- `/reviewresume <url(required): link to the chat>`: Fetch the PDF attached to the linked chat message and feed it to the LLM backend for review. This will require a PDF parser. This command could also be implemented as a context menu command.
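
As a rough illustration only, here is a minimal sketch of how these two commands could be registered, assuming the bot is built on discord.py's `app_commands`; the option names and the `query_backend()`/`extract_pdf_text()` helpers are hypothetical stand-ins for the project's actual code.

```python
from typing import Optional

import discord
from discord import app_commands

intents = discord.Intents.default()
client = discord.Client(intents=intents)
tree = app_commands.CommandTree(client)


async def query_backend(prompt: str, model: Optional[str]) -> str:
    """Hypothetical helper that would call litellm_openai()/litellm_ollama() in backend.py."""
    return "stubbed LLM response"


async def extract_pdf_text(url: str) -> str:
    """Hypothetical helper that would fetch the linked message's PDF and parse its text."""
    return "stubbed resume text"


@tree.command(name="ask", description="Submit a prompt directly to the LLM backend")
@app_commands.describe(prompt="The prompt to send", model="Optional model to use")
async def ask(interaction: discord.Interaction, prompt: str, model: Optional[str] = None):
    # LLM calls usually exceed Discord's 3-second acknowledgement window, so defer first.
    await interaction.response.defer()
    answer = await query_backend(prompt, model)
    await interaction.followup.send(answer)


@tree.command(name="reviewresume", description="Review the PDF resume attached to a linked message")
@app_commands.describe(url="Link to the chat message containing the PDF")
async def reviewresume(interaction: discord.Interaction, url: str):
    await interaction.response.defer()
    resume_text = await extract_pdf_text(url)
    review = await query_backend(f"Review this resume:\n{resume_text}", model=None)
    await interaction.followup.send(review)
```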
The component diagram below will be updated as the project matures.
```mermaid
---
title: LLM Assistant High Level Components Diagram
---
graph LR
    subgraph gpu1["💻 GPU-01"]
        direction TB
        subgraph docker1["🐋 docker"]
            direction TB
            ollama <--> B[NVIDIA Container Toolkit]
        end
        model["💾 /mnt/models"] -. mount .-> ollama
    end
    subgraph bastion["💻 BASTION"]
        direction TB
        subgraph docker2["🐋 docker"]
            subgraph llm["🤖 llm-assistant"]
                direction LR
                subgraph main.py
                    direction TB
                    ca("command_ask()")
                    cr("command_review()")
                end
                subgraph backend.py
                    direction TB
                    oai("litellm_openai()")
                    llama("litellm_ollama()")
                end
            end
        end
    end
    devops(("DevOps"))
    discord(("Discord"))
    openai(("🧠 OpenAI API"))
    ca -. "model:gpt3.5|gpt4" .-> oai
    ca -. "model:llama2|mistral|others" .-> llama
    llama -. API .-> ollama
    oai -. API .-> openai
    devops -- deploys --> docker1 & docker2
    discord <-. websocket .-> main.py
```
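
The `backend.py` functions in the diagram suggest routing through LiteLLM: gpt3.5/gpt4 prompts go to the OpenAI API, while llama2/mistral/other prompts go to the Ollama server on GPU-01. Below is a hedged sketch of what that routing could look like, assuming the LiteLLM client library; the model identifiers, the Ollama host address, and the message format are illustrative assumptions, not the project's actual code.

```python
from typing import Optional

from litellm import acompletion

OPENAI_MODELS = {"gpt-3.5-turbo", "gpt-4"}   # assumed mapping for "gpt3.5|gpt4"
OLLAMA_HOST = "http://gpu-01:11434"          # assumed address of the ollama container on GPU-01


async def litellm_openai(prompt: str, model: str) -> str:
    """Route gpt3.5/gpt4 requests to the OpenAI API via LiteLLM."""
    response = await acompletion(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


async def litellm_ollama(prompt: str, model: str) -> str:
    """Route llama2/mistral/other requests to the Ollama server via LiteLLM."""
    response = await acompletion(
        model=f"ollama/{model}",             # LiteLLM's provider prefix for Ollama-hosted models
        messages=[{"role": "user", "content": prompt}],
        api_base=OLLAMA_HOST,
    )
    return response.choices[0].message.content


async def command_ask(prompt: str, model: Optional[str] = None) -> str:
    """Dispatch to the OpenAI or Ollama path based on the selected model."""
    model = model or "gpt-3.5-turbo"
    if model in OPENAI_MODELS:
        return await litellm_openai(prompt, model)
    return await litellm_ollama(prompt, model)
```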
The initial work breakdown across the System, Dev, and AI teams is tracked on the Github board:

```mermaid
mindmap
  root((Github Board))
    System Team
      Deploy docker images
      Build pipeline to retrieve AI model
      Automate installation
      Setup Github CI
    Dev Team
      Write Discord bot
      Write tests
      Build & Package Discord bot in Docker image
    AI Team
      Test performance of new AI models
      Construct prompts for bots
      Choose AI models
      Publish AI models benchmark
```