Description
Currently, the GitLab MCP server reads the PAT (GITLAB_PERSONAL_ACCESS_TOKEN) from the environment at startup and applies it globally to all requests.
```typescript
const GITLAB_PERSONAL_ACCESS_TOKEN = process.env.GITLAB_PERSONAL_ACCESS_TOKEN;

// ...

// Modify DEFAULT_HEADERS to include agent configuration
const DEFAULT_HEADERS: Record<string, string> = {
  Accept: "application/json",
  "Content-Type": "application/json",
};

// The token is resolved once at startup, so every request shares the same credentials
if (IS_OLD) {
  DEFAULT_HEADERS["Private-Token"] = `${GITLAB_PERSONAL_ACCESS_TOKEN}`;
} else {
  DEFAULT_HEADERS["Authorization"] = `Bearer ${GITLAB_PERSONAL_ACCESS_TOKEN}`;
}
```
This approach limits multi-user scenarios and per-user access control. I have seen related discussions/approaches in #130 (which was rolled back) and #121.
In my use case, I am not adding the MCP server to an IDE (VS Code, Claude Code, etc.). I am building multi-tenant AI agent software.
A user connects a GitLab application and authorizes it to access their account; in return I get a token I can use to interact with GitLab's REST API on their behalf (automating this instead of asking the user to generate a PAT from their settings).
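For context, this is roughly the standard GitLab OAuth authorization-code exchange my backend performs. The `/oauth/token` endpoint is GitLab's documented one; the helper name and the environment variable names are placeholders for illustration.

```typescript
// Rough illustration of the standard GitLab OAuth authorization-code exchange.
// GITLAB_APP_ID / GITLAB_APP_SECRET / GITLAB_REDIRECT_URI are placeholder names.
async function exchangeCodeForToken(code: string): Promise<string> {
  const res = await fetch("https://gitlab.com/oauth/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      client_id: process.env.GITLAB_APP_ID ?? "",
      client_secret: process.env.GITLAB_APP_SECRET ?? "",
      code, // authorization code returned to the redirect URI
      grant_type: "authorization_code",
      redirect_uri: process.env.GITLAB_REDIRECT_URI ?? "",
    }),
  });
  const data = (await res.json()) as { access_token: string };
  return data.access_token; // usable against the REST API on the user's behalf
}
```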
So I want the MCP server's Docker container to run continuously without being limited to a hardcoded PAT in its environment (acting like a remote server). Whenever a user sends a request to my backend, I would pass their token in the headers of the request to the MCP server, which would then use it to invoke the tools (REST API endpoint wrappers) with the appropriate access (for example, each user can only query their own private resources); see the sketch below. Would that be counter-intuitive?
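Here is a minimal sketch of what I have in mind, not tied to the actual server internals: the container resolves the token per request from a header (the name `X-GitLab-Token` is just a hypothetical convention my backend and the server would agree on) and only falls back to the env var for single-user setups.

```typescript
import http from "node:http";

const GITLAB_API_URL = process.env.GITLAB_API_URL ?? "https://gitlab.com/api/v4";

// Build GitLab headers from a per-request token, falling back to the env var
// so the existing single-user setup keeps working.
function buildGitlabHeaders(userToken?: string): Record<string, string> {
  const token = userToken ?? process.env.GITLAB_PERSONAL_ACCESS_TOKEN;
  return {
    Accept: "application/json",
    "Content-Type": "application/json",
    Authorization: `Bearer ${token}`,
  };
}

const server = http.createServer(async (req, res) => {
  // Per-user token forwarded by my multi-tenant backend (hypothetical header name).
  const raw = req.headers["x-gitlab-token"];
  const userToken = Array.isArray(raw) ? raw[0] : raw;

  // Example "tool": list the projects visible to *this* user's token.
  const gitlabRes = await fetch(`${GITLAB_API_URL}/projects?membership=true`, {
    headers: buildGitlabHeaders(userToken),
  });

  res.writeHead(gitlabRes.status, { "Content-Type": "application/json" });
  res.end(await gitlabRes.text());
});

server.listen(3000);
```

With something along these lines, one long-running container could serve all users, since no credential is fixed at startup.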
Sadly, with a static PAT baked in, the server cannot use user-specific credentials unless a new container is instantiated for each user, which I don't think is scalable in my case.