Usage
This guide provides detailed instructions on how to use the services provided by the LLM Agentic Tool Mesh platform, both from code and through examples that demonstrate how to create your first tool.
Before proceeding, ensure that you have completed all installation steps and that all necessary dependencies are installed. If you need assistance, refer to the Installation Guide.
LLM Agentic Tool Mesh provides a self-service platform with several packages designed to meet various needs:
- System Package: Includes services for managing tools on both the client and server sides, as well as utilities like logging.
- Chat Package: Offers services for creating chat applications, including prompt management, LLM model integration, and memory handling.
- Agents Package: Provides agentic services to create a Reasoning Engine or Multi-Agent task force.
- RAG Package: Contains services for injecting, storing, and retrieving data using Retrieval-Augmented Generation (RAG). This package should be explicitly declared during installation.
You can install the relevant packages using pip from this repository:
```shell
pip install 'llmesh[rag]'
```
Once installed, you can import and use the packages in your code. Below is an example that demonstrates how to initialize an LLM model and invoke it:
```python
from athon.chat import ChatModel
from langchain.schema import HumanMessage, SystemMessage

# Example configuration for the Chat Model
LLM_CONFIG = {
    'type': 'LangChainChatOpenAI',
    'api_key': 'your-api-key-here',
    'model_name': 'gpt-4o',
    'temperature': 0.7
}

# Initialize the Chat Model with the LLM configuration
chat = ChatModel.create(LLM_CONFIG)

# Define the prompts
prompts = [
    SystemMessage(content="Convert the message to pirate language"),
    HumanMessage(content="Today is a sunny day and the sky is blue")
]

# Invoke the model with the prompts
result = chat.invoke(prompts)

# Handle the response
if result.status == "success":
    print(f"COMPLETION:\n{result.content}")
else:
    print(f"ERROR:\n{result.error_message}")
```
You can find more information about the platform services in the Software Architecture documentation.
We have developed a series of web applications and tools, complete with examples, to demonstrate the capabilities of LLM Agentic Tool Mesh.
- Chatbot (examples/app_chatbot): This chatbot can reason and invoke the appropriate LLM tools to perform specific actions. You configure it through files that define the LLM Agentic Tool Mesh platform services, project settings, toolkit, and memory. The web app orchestrates both local and remote LLM tools, allowing them to define their own HTML interfaces, which support the presentation of text, images, and code.
- Admin Panel (examples/app_backpanel): The admin panel enables the configuration of basic LLM tools that perform actions via LLM calls. It lets you set the system prompt, select the LLM model, and define the LLM tool interface.
- Agentic Memory (examples/app_memory): This application uses an LLM to categorize messages as either personal or project-related, storing them in the appropriate memory storage. Different chatbots can access and use the project memory, facilitating information sharing and collaboration within teams.
- Basic Cowriter (examples/tool_copywriter): A tool that rewrites text, providing explanations for its enhancements and changes.
- Temperature Finder (examples/tool_api): Fetches and displays the current temperature for a specified location using a public API.
- Temperature Analyzer (examples/tool_analyzer): Generates code with a language model to analyze historical temperature data and create visual charts.
- Telco Expert (examples/tool_rag): A RAG tool that provides quick and accurate access to 5G specifications.
- OpenAPI Manager (examples/tool_agents): A multi-agent tool that reads OpenAPI documentation and answers user queries with relevant information.
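The Agentic Memory example above hinges on a classify-then-route step: an LLM decides whether a message is personal or project-related, and the message lands in the matching store. The sketch below illustrates only that routing idea; the keyword classifier stands in for the real LLM call, and all names here (`MemoryStore`, `route_message`) are hypothetical, not part of the platform's API.

```python
# Hypothetical sketch of the personal-vs-project routing behind the
# Agentic Memory example. A keyword check stands in for the LLM
# categorization step used by the real application.

PROJECT_KEYWORDS = {"sprint", "deadline", "release", "ticket", "deploy"}

class MemoryStore:
    """Toy in-memory store; the real example persists to configured storage."""
    def __init__(self, name):
        self.name = name
        self.messages = []

    def add(self, message):
        self.messages.append(message)

def classify(message):
    """Stand-in for the LLM categorization step."""
    words = set(message.lower().split())
    return "project" if words & PROJECT_KEYWORDS else "personal"

def route_message(message, personal, project):
    """Store the message in the store matching its category; return the store name."""
    store = project if classify(message) == "project" else personal
    store.add(message)
    return store.name

personal = MemoryStore("personal")
project = MemoryStore("project")
print(route_message("The release deadline moved to Friday", personal, project))  # project
print(route_message("I prefer tea over coffee", personal, project))  # personal
```

In the real application the shared project store is what lets several chatbots collaborate on the same team context.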
You can run the tools and web applications individually or use the provided run_examples.sh script to run them all together. Once everything is started, you can access the chatbot app at https://127.0.0.1:5001/ and the back panel at https://127.0.0.1:5011/.
Depending on whether you are using ChatGPT or other models, you will need to set the LLM parameters accordingly in the app and tool configuration files. Below are examples of how to configure the parameters for each environment:
Update the configuration file (e.g., config.yaml) with the following settings:
```yaml
# LLM settings, normally nested under the model or llm field
type: LangChainChatOpenAI
model_name: gpt-4o
api_key: $ENV{OPENAI_API_KEY}
temperature: 0
seed: 42
```
Update the configuration file (e.g., config.yaml) with the following settings:
```yaml
# LLM settings, normally nested under the model or llm field
type: LangChainAzureChatOpenAI
azure_deployment: $ENV{HPE_DEPLOYMENT}
api_version: "2023-10-01-preview"
endpoint: $ENV{HPE_ENDPOINT}
api_key: $ENV{HPE_API_KEY}
temperature: 0
seed: 42
```
These changes should be made for all tools and applications. By default, they are set to use ChatGPT. To switch to ChatHPE, simply modify the relevant parameters as shown above.
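The `$ENV{...}` placeholders in the snippets above are resolved from environment variables when the configuration is loaded. As a hedged illustration of that pattern (this is a sketch of the substitution idea, not the platform's own resolver):

```python
# Sketch of expanding $ENV{VAR} placeholders like those in the config
# snippets above. Illustrative only; not the platform's actual loader.
import os
import re

_ENV_PATTERN = re.compile(r"\$ENV\{([A-Za-z_][A-Za-z0-9_]*)\}")

def resolve_env(value: str) -> str:
    """Replace every $ENV{NAME} occurrence with os.environ[NAME]."""
    def _sub(match):
        name = match.group(1)
        if name not in os.environ:
            raise KeyError(f"environment variable {name} is not set")
        return os.environ[name]
    return _ENV_PATTERN.sub(_sub, value)

os.environ["OPENAI_API_KEY"] = "sk-demo"  # demo value for illustration only
print(resolve_env("api_key: $ENV{OPENAI_API_KEY}"))  # api_key: sk-demo
```

The practical upshot is that secrets such as API keys stay out of the configuration files and only need to be exported in the shell before starting the apps.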
Each tool or app configuration file, such as examples/app_chatbot/config.yaml, can be updated similarly:
```yaml
chat:
  type: LangChainChatOpenAI  # Update to match the specific LLM environment
  system_prompt: $PROMPT{chat_system_prompt}
  model:
    # Include the LLM model configuration details here
```
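The YAML above nests the model settings under the chat section. A plain-Python sketch of the same shape can make the nesting explicit; the `build_chat_config` helper is hypothetical and only mirrors the field names used in this guide:

```python
# Hypothetical helper mirroring the nested chat/model layout of
# config.yaml; field names follow the snippets in this guide.
import os

def build_chat_config(llm_type: str, system_prompt: str, model: dict) -> dict:
    """Nest the model settings under the chat section, as in config.yaml."""
    return {
        "chat": {
            "type": llm_type,
            "system_prompt": system_prompt,
            "model": dict(model),
        }
    }

config = build_chat_config(
    "LangChainChatOpenAI",
    "You are a helpful assistant",
    {
        "model_name": "gpt-4o",
        "api_key": os.environ.get("OPENAI_API_KEY", ""),
        "temperature": 0,
    },
)
print(config["chat"]["model"]["model_name"])  # gpt-4o
```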
If you'd like to create your own tool from a template, detailed instructions are available in the Guide to Creating a New Athon Tool.
When creating a new web app, you can build on the existing examples. Because all services are fully parameterized, you have the flexibility to design a variety of user experience panels. For instance, the examples include a chatbot as a user interface and an admin panel for configuring an LLM tool, but you can also develop web apps that support tasks like deployment or experiments aimed at optimizing service parameters for specific objectives.