10 changes: 9 additions & 1 deletion .gitignore
@@ -63,4 +63,12 @@ spring_ai/target/**
spring_ai/create_user.sql
spring_ai/drop.sql
src/client/spring_ai/target/classes/*
api_server_key
api_server_key
src/client/mcp/rag/optimizer_settings.json
src/client/mcp/rag/pyproject.toml
src/client/mcp/rag/main.py
src/client/mcp/rag/.python-version
src/client/mcp/rag/uv.lock
src/client/mcp/rag/node_modules/
src/client/mcp/rag/package-lock.json
src/client/mcp/rag/package.json
178 changes: 178 additions & 0 deletions src/client/mcp/rag/README.md
@@ -0,0 +1,178 @@

# MCP Server for a tested AI Optimizer & Toolkit configuration

**Version:** *Developer preview*

## Introduction
This document describes how to re-use a configuration tested in the **AI Optimizer & Toolkit**, expose it as an MCP tool to a local **Claude Desktop**, and set it up as a remote MCP server. This early draft implementation uses the `stdio` and `sse` transports for the interaction between the agent dashboard, represented by **Claude Desktop**, and the tool.

**NOTICE**: Only `Ollama` and `OpenAI` configurations are currently supported. Full support will come later.

## Prerequisites
You need:
- Node.js: v20.17.0+
- npx/npm: v11.2.0+
- uv: v0.7.10+
- Claude Desktop (free version)

## Setup
With **[`uv`](https://docs.astral.sh/uv/getting-started/installation/)** installed, run the following commands in the project directory `<PROJECT_DIR>/src/client/mcp/rag/`:

```bash
uv init --python=3.11 --no-workspace
uv venv --python=3.11
source .venv/bin/activate
uv add mcp langchain-core==0.3.52 oracledb~=3.1 langchain-community==0.3.21 langchain-huggingface==0.1.2 langchain-openai==0.3.13 langchain-ollama==0.3.2
```

## Export config
In the **AI Optimizer & Toolkit** web interface, after testing a configuration, go to `Settings/Client Settings`:

![Client Settings](./images/export.png)

* select the `Include Sensitive Settings` checkbox
* press the `Download Settings` button to download the configuration into the project directory `src/client/mcp/rag` as `optimizer_settings.json`.
* in `<PROJECT_DIR>/src/client/mcp/rag/rag_base_optimizer_config_mcp.py`, change the filepath to the absolute path of your `optimizer_settings.json` file (see the example below).
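
For reference, the line to update in `rag_base_optimizer_config_mcp.py` looks like the following sketch; the path shown is a placeholder, so substitute your own absolute path:

```python
# Placeholder path - point this at the optimizer_settings.json you downloaded
rag.set_optimizer_settings_path("<PROJECT_DIR>/src/client/mcp/rag/optimizer_settings.json")
```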


## Standalone client
There is a client that you can run from the command line, without MCP, to test the configuration:

```bash
uv run rag_base_optimizer_config.py
```
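
If you want to see roughly what such a standalone run does, here is a minimal sketch built on the helper functions in `optimizer_utils/rag.py`; it is not the actual `rag_base_optimizer_config.py`, and the path and question are placeholders:

```python
# Minimal standalone sketch: load the exported settings and ask one question.
from optimizer_utils import rag

rag.set_optimizer_settings_path("<PROJECT_DIR>/src/client/mcp/rag/optimizer_settings.json")  # placeholder path
print(rag.rag_tool_base("Which topics are covered in my knowledge base?"))  # sample question
```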

## Quick test via MCP "inspector"

* Run the inspector:

```bash
npx @modelcontextprotocol/inspector uv run rag_base_optimizer_config_mcp.py
```

* connect to `http://localhost:6274/` with your browser
* set the `Inspector Proxy Address` to `http://127.0.0.1:6277`
* test the tool.


## Claude Desktop setup

* In the **Claude Desktop** application, under `Settings/Developer/Edit Config`, open `claude_desktop_config.json` and add a reference to the local MCP server for RAG in `<PROJECT_DIR>/src/client/mcp/rag/`:
```json
{
"mcpServers": {
...
,
"rag":{
"command":"bash",
"args":[
"-c",
"source <PROJECT_DIR>/src/client/mcp/rag/.venv/bin/activate && uv run <PROJECT_DIR>/src/client/mcp/rag/rag_base_optimizer_config_mcp.py"
]
}
}
}
```
* In the **Claude Desktop** application, in `Settings/General/Claude Settings/Configure`, under the `Profile` tab, update fields like:
  - `Full Name`
  - `What should we call you`

  and so on, and put the following text in `What personal preferences should Claude consider in responses?`:

```
#INSTRUCTION:
Always call the rag_tool tool when the user asks a factual or information-seeking question, even if you think you know the answer.
Show the rag_tool message as-is, without modification.
```
This forces the use of `rag_tool` for every question.

**NOTICE**: If you prefer, in this agent dashboard or any other, you can instead send a message in the conversation with the same content as the `#INSTRUCTION` block above to push the LLM to use the RAG tool.

* Restart **Claude Desktop**.

* You will see two warnings about the `rag_tool` configuration: they will disappear and do not prevent the tool from being activated.

* Start a conversation. You should see a pop-up asking to allow the `rag` tool to be used to answer the question:

![Rag Tool](./images/rag_tool.png)

If the question is related to the knowledge base content stored in the vector store, you will get an answer based on that information. Otherwise, Claude will try to answer using the information the LLM was trained on, or other tools configured in the same Claude Desktop.


## Make the RAG tool a remote MCP server

In `rag_base_optimizer_config_mcp.py`:

* Update the absolute path of your `optimizer_settings.json`. Example:

```python
rag.set_optimizer_settings_path("/Users/cdebari/Documents/GitHub/ai-optimizer-mcp-export/src/client/mcp/rag/optimizer_settings.json")
```

* Uncomment the `Remote client` line and comment out the `Local` line:

```python
#mcp = FastMCP("rag", port=8001) #Remote client
mcp = FastMCP("rag") #Local
```

* Similarly, uncomment the `sse` line and comment out the `stdio` line:
```python
mcp.run(transport='stdio')
#mcp.run(transport='sse')
```

* Start the MCP server in another shell with:
```bash
uv run rag_base_optimizer_config_mcp.py
```
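
Putting the pieces together, the remote-mode server looks roughly like the following sketch. The actual `rag_base_optimizer_config_mcp.py` in this repository may differ in details; the tool name `rag_tool`, its docstring, and the settings path are assumptions:

```python
# Hedged sketch of a remote (SSE) MCP server wrapping the RAG tool.
from mcp.server.fastmcp import FastMCP
from optimizer_utils import rag

rag.set_optimizer_settings_path("<PROJECT_DIR>/src/client/mcp/rag/optimizer_settings.json")  # placeholder path
mcp = FastMCP("rag", port=8001)  # Remote client

@mcp.tool()
def rag_tool(question: str) -> str:  # tool name assumed from this README
    """Answer a question using the RAG configuration exported from the AI Optimizer & Toolkit."""
    return rag.rag_tool_base(question)

if __name__ == "__main__":
    mcp.run(transport="sse")  # use transport="stdio" for the local Claude Desktop setup
```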


## Quick test of the remote server

* Run the inspector:

```bash
npx @modelcontextprotocol/inspector
```

* connect the browser to `http://127.0.0.1:6274`

* set the Transport Type to `SSE`

* set the `URL` to `http://localhost:8001/sse`

* test the tool.
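
As an alternative to the inspector, a small client script can exercise the SSE endpoint directly. This is a minimal sketch assuming the MCP Python SDK's SSE client; the tool name `rag_tool` and its `question` argument are assumptions based on this README:

```python
import asyncio
from mcp import ClientSession
from mcp.client.sse import sse_client

async def main():
    # Connect to the SSE endpoint exposed by the remote MCP server
    async with sse_client("http://localhost:8001/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])  # confirm the RAG tool is registered
            result = await session.call_tool("rag_tool", {"question": "Which topics are covered in my knowledge base?"})
            print(result.content)

asyncio.run(main())
```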



## Claude Desktop setup for remote/local server
The free version of Claude Desktop does not allow connecting to a remote server. For testing purposes only, you can work around this with a proxy library called `mcp-remote`.
If you already have Node.js v20.17.0+ installed, the following should work:

* replace the `rag` entry under `mcpServers` in `claude_desktop_config.json` with:
```json
{
"mcpServers": {
"remote": {
"command": "npx",
"args": [
"mcp-remote",
"http://127.0.0.1:8001/sse"
]
}
}
}
```
* restart Claude Desktop.

**NOTICE**: If you have any problems running it, check the logs to see whether they are caused by an old npx/Node.js version being used with the `mcp-remote` library. Check with:
```bash
nvm list
```
whether versions other than the default are installed. Claude Desktop may pick up an older one. Remove any other nvm-managed versions to force the use of the only one available, at minimum v20.17.0+.

* restart Claude Desktop and test the remote server.


Binary file added src/client/mcp/rag/cover.png
Binary file added src/client/mcp/rag/images/export.png
Binary file added src/client/mcp/rag/images/rag_tool.png
79 changes: 79 additions & 0 deletions src/client/mcp/rag/optimizer_utils/config.py
@@ -0,0 +1,79 @@
from langchain_openai import ChatOpenAI
from langchain_openai import OpenAIEmbeddings
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_ollama import OllamaEmbeddings
from langchain_ollama import OllamaLLM

from langchain_community.vectorstores.utils import DistanceStrategy

from langchain_community.vectorstores import oraclevs
from langchain_community.vectorstores.oraclevs import OracleVS
import oracledb


def get_llm(data):
    """Build the chat LLM client described by the exported optimizer settings."""
    llm = None
    llm_config = data["ll_model_config"][data["user_settings"]["ll_model"]["model"]]
    api = llm_config["api"]
    url = llm_config["url"]
    api_key = llm_config["api_key"]
    model = data["user_settings"]["ll_model"]["model"]
    print(f"CHAT_MODEL: {model} {api} {url}")
    if api == "ChatOllama":
        # Initialize an Ollama-served LLM
        llm = OllamaLLM(
            model=model,
            base_url=url
        )
    elif api == "OpenAI":
        # Initialize an OpenAI chat model
        llm = ChatOpenAI(
            model=model,
            api_key=api_key
        )
    return llm

def get_embeddings(data):
    """Build the embedding model client described by the exported optimizer settings."""
    embeddings = None
    model = data["user_settings"]["rag"]["model"]
    api = data["embed_model_config"][model]["api"]
    url = data["embed_model_config"][model]["url"]
    api_key = data["embed_model_config"][model]["api_key"]
    print(f"EMBEDDINGS: {model} {api} {url}")
    if api == "OllamaEmbeddings":
        # Ollama-served embedding model
        embeddings = OllamaEmbeddings(
            model=model,
            base_url=url)
    elif api == "OpenAIEmbeddings":
        # OpenAI-hosted embedding model
        embeddings = OpenAIEmbeddings(
            model=model,
            api_key=api_key
        )
    return embeddings

def get_vectorstore(data, embeddings):
    """Open the Oracle vector store referenced by the exported optimizer settings."""
    config = data["database_config"][data["user_settings"]["rag"]["database"]]

    conn23c = oracledb.connect(user=config["user"],
                               password=config["password"], dsn=config["dsn"])
    print("DB Connection successful!")

    # Map the exported distance metric onto the LangChain distance strategy
    metric = data["user_settings"]["rag"]["distance_metric"]
    dist_strategy = DistanceStrategy.COSINE
    if metric == "COSINE":
        dist_strategy = DistanceStrategy.COSINE
    elif metric == "EUCLIDEAN":
        dist_strategy = DistanceStrategy.EUCLIDEAN

    vector_store_name = data["user_settings"]["rag"]["vector_store"]
    print(f"Opening vector store: {vector_store_name}")
    knowledge_base = OracleVS(conn23c, embeddings, vector_store_name, dist_strategy)
    return knowledge_base
84 changes: 84 additions & 0 deletions src/client/mcp/rag/optimizer_utils/rag.py
@@ -0,0 +1,84 @@
"""
Copyright (c) 2024, 2025, Oracle and/or its affiliates.
Licensed under the Universal Permissive License v1.0 as shown at http://oss.oracle.com/licenses/upl.
"""
from typing import List
from mcp.server.fastmcp import FastMCP
import os
from dotenv import load_dotenv
#from sentence_transformers import CrossEncoder
#from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser
import json
import logging
logging.basicConfig(level=logging.DEBUG)

from optimizer_utils import config

_optimizer_settings_path= ""

def set_optimizer_settings_path(path: str):
global _optimizer_settings_path
_optimizer_settings_path = path

def rag_tool_base(question: str) -> str:
    """
    Use this tool to answer any question that may benefit from up-to-date or domain-specific information.

    Args:
        question: the question for which you are looking for an answer

    Returns:
        JSON string with the answer
    """
    with open(_optimizer_settings_path, "r") as file:
        data = json.load(file)
    answer = ""
    try:
        # Build the embedding model and open the vector store defined in the exported settings
        embeddings = config.get_embeddings(data)
        print("Embedding successful!")
        knowledge_base = config.get_vectorstore(data, embeddings)
        print("Knowledge base connection successful!")

        user_question = question

        # Pick the system prompt selected in the optimizer's user settings
        rag_prompt = ""
        for d in data["prompts_config"]:
            if d["name"] == data["user_settings"]["prompts"]["sys"]:
                rag_prompt = d["prompt"]

        template = """DOCUMENTS: {context} \n""" + rag_prompt + """\nQuestion: {question} """
        prompt = PromptTemplate.from_template(template)

        # Retrieve the top_k most similar chunks configured in the optimizer settings
        retriever = knowledge_base.as_retriever(search_kwargs={"k": data["user_settings"]["rag"]["top_k"]})

        # Initialize the LLM
        llm = config.get_llm(data)

        # Classic RAG chain: retrieve context, fill the prompt, call the LLM, parse to string
        chain = (
            {"context": retriever, "question": RunnablePassthrough()}
            | prompt
            | llm
            | StrOutputParser()
        )
        answer = chain.invoke(user_question)
    except Exception as e:
        print(e)
        print("Connection failed!")
        answer = ""

    return f"{answer}"