Merged
26 changes: 23 additions & 3 deletions config/serverless.es.yml
@@ -114,10 +114,10 @@ xpack.searchQueryRules.enabled: true
 ## Search Connectors in stack management
 xpack.contentConnectors.ui.enabled: false
 
-# Elastic Managed LLM
+# Elastic Managed LLMs
 xpack.actions.preconfigured:
-  Elastic-Managed-LLM:
-    name: Elastic Managed LLM
+  General-Purpose-LLM-v1:
+    name: General Purpose LLM v1
     actionTypeId: .inference
     exposeConfig: true
     config:
@@ -126,3 +126,23 @@ xpack.actions.preconfigured:
       inferenceId: ".rainbow-sprinkles-elastic"
       providerConfig:
         model_id: "rainbow-sprinkles"
+  General-Purpose-LLM-v2:
+    name: General Purpose LLM v2
+    actionTypeId: .inference
+    exposeConfig: true
+    config:
+      provider: "elastic"
+      taskType: "chat_completion"
+      inferenceId: ".gp-llm-v2-chat_completion"
+      providerConfig:
+        model_id: "gp-llm-v2"
+  General-Purpose-LLM-v3:
+    name: General Purpose LLM v3
+    actionTypeId: .inference
+    exposeConfig: true
+    config:
+      provider: "elastic"
+      taskType: "chat_completion"
+      inferenceId: ".gp-llm-v3-chat_completion"
+      providerConfig:
+        model_id: "gp-llm-v3"
34 changes: 27 additions & 7 deletions config/serverless.oblt.complete.yml
@@ -23,15 +23,35 @@ xpack.features.overrides:
 ### Workflows Management should be moved from Analytics category to the Observability one.
 workflowsManagement.category: 'observability'
 
-# Elastic Managed LLM
+# Elastic Managed LLMs
 xpack.actions.preconfigured:
-  Elastic-Managed-LLM:
-    name: Elastic Managed LLM
+  General-Purpose-LLM-v1:
+    name: General Purpose LLM v1
     actionTypeId: .inference
     exposeConfig: true
     config:
-      provider: 'elastic'
-      taskType: 'chat_completion'
-      inferenceId: '.rainbow-sprinkles-elastic'
+      provider: "elastic"
+      taskType: "chat_completion"
+      inferenceId: ".rainbow-sprinkles-elastic"
       providerConfig:
-        model_id: 'rainbow-sprinkles'
+        model_id: "rainbow-sprinkles"
+  General-Purpose-LLM-v2:
+    name: General Purpose LLM v2
+    actionTypeId: .inference
+    exposeConfig: true
+    config:
+      provider: "elastic"
+      taskType: "chat_completion"
+      inferenceId: ".gp-llm-v2-chat_completion"
+      providerConfig:
+        model_id: "gp-llm-v2"
+  General-Purpose-LLM-v3:
+    name: General Purpose LLM v3
+    actionTypeId: .inference
+    exposeConfig: true
+    config:
+      provider: "elastic"
+      taskType: "chat_completion"
+      inferenceId: ".gp-llm-v3-chat_completion"
+      providerConfig:
+        model_id: "gp-llm-v3"
26 changes: 23 additions & 3 deletions config/serverless.security.complete.yml
@@ -7,10 +7,10 @@ xpack.features.overrides:
 ### Workflows Management should be moved from Analytics category to the Security one.
 workflowsManagement.category: "security"
 
-# Elastic Managed LLM
+# Elastic Managed LLMs
 xpack.actions.preconfigured:
-  Elastic-Managed-LLM:
-    name: Elastic Managed LLM
+  General-Purpose-LLM-v1:
+    name: General Purpose LLM v1
     actionTypeId: .inference
     exposeConfig: true
     config:
@@ -19,3 +19,23 @@ xpack.actions.preconfigured:
       inferenceId: ".rainbow-sprinkles-elastic"
       providerConfig:
         model_id: "rainbow-sprinkles"
+  General-Purpose-LLM-v2:
+    name: General Purpose LLM v2
+    actionTypeId: .inference
+    exposeConfig: true
+    config:
+      provider: "elastic"
+      taskType: "chat_completion"
+      inferenceId: ".gp-llm-v2-chat_completion"
+      providerConfig:
+        model_id: "gp-llm-v2"
+  General-Purpose-LLM-v3:
+    name: General Purpose LLM v3
+    actionTypeId: .inference
+    exposeConfig: true
+    config:
+      provider: "elastic"
+      taskType: "chat_completion"
+      inferenceId: ".gp-llm-v3-chat_completion"
+      providerConfig:
+        model_id: "gp-llm-v3"
34 changes: 27 additions & 7 deletions config/serverless.security.search_ai_lake.yml
@@ -94,15 +94,35 @@ xpack.fleet.integrationsHomeOverride: '/app/security/configurations/integrations
 xpack.fleet.prereleaseEnabledByDefault: true
 xpack.fleet.internal.registry.searchAiLakePackageAllowlistEnabled: true
 
-# Elastic Managed LLM
+# Elastic Managed LLMs
 xpack.actions.preconfigured:
-  Elastic-Managed-LLM:
-    name: Elastic Managed LLM
+  General-Purpose-LLM-v1:
+    name: General Purpose LLM v1
     actionTypeId: .inference
     exposeConfig: true
     config:
-      provider: 'elastic'
-      taskType: 'chat_completion'
-      inferenceId: '.rainbow-sprinkles-elastic'
+      provider: "elastic"
+      taskType: "chat_completion"
+      inferenceId: ".rainbow-sprinkles-elastic"
       providerConfig:
-        model_id: 'rainbow-sprinkles'
+        model_id: "rainbow-sprinkles"
+  General-Purpose-LLM-v2:
+    name: General Purpose LLM v2
+    actionTypeId: .inference
+    exposeConfig: true
+    config:
+      provider: "elastic"
+      taskType: "chat_completion"
+      inferenceId: ".gp-llm-v2-chat_completion"
+      providerConfig:
+        model_id: "gp-llm-v2"
+  General-Purpose-LLM-v3:
+    name: General Purpose LLM v3
+    actionTypeId: .inference
+    exposeConfig: true
+    config:
+      provider: "elastic"
+      taskType: "chat_completion"
+      inferenceId: ".gp-llm-v3-chat_completion"
+      providerConfig:
+        model_id: "gp-llm-v3"
26 changes: 23 additions & 3 deletions config/serverless.workplaceai.yml
@@ -29,10 +29,10 @@ xpack.contentConnectors.enabled: false
 ## Disable Kibana Product Intercept
 xpack.product_intercept.enabled: false
 
-# Elastic Managed LLM
+# Elastic Managed LLMs
 xpack.actions.preconfigured:
-  Elastic-Managed-LLM:
-    name: Elastic Managed LLM
+  General-Purpose-LLM-v1:
+    name: General Purpose LLM v1
     actionTypeId: .inference
     exposeConfig: true
     config:
@@ -41,3 +41,23 @@ xpack.actions.preconfigured:
       inferenceId: ".rainbow-sprinkles-elastic"
       providerConfig:
         model_id: "rainbow-sprinkles"
+  General-Purpose-LLM-v2:
+    name: General Purpose LLM v2
+    actionTypeId: .inference
+    exposeConfig: true
+    config:
+      provider: "elastic"
+      taskType: "chat_completion"
+      inferenceId: ".gp-llm-v2-chat_completion"
+      providerConfig:
+        model_id: "gp-llm-v2"
+  General-Purpose-LLM-v3:
+    name: General Purpose LLM v3
+    actionTypeId: .inference
+    exposeConfig: true
+    config:
+      provider: "elastic"
+      taskType: "chat_completion"
+      inferenceId: ".gp-llm-v3-chat_completion"
+      providerConfig:
+        model_id: "gp-llm-v3"
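All five serverless configs register the same three preconfigured connectors with an identical shape. For reference, here is a minimal typed sketch of that shape; the interface is an illustrative model built from the field names in the diff, not Kibana's actual connector type:

```typescript
// Illustrative model of the preconfigured .inference connector entries
// above; the interface is an assumption, not Kibana's real type.
interface PreconfiguredInferenceConnector {
  name: string;
  actionTypeId: '.inference';
  exposeConfig: boolean;
  config: {
    provider: 'elastic';
    taskType: 'chat_completion';
    inferenceId: string;
    providerConfig: { model_id: string };
  };
}

const preconfigured: Record<string, PreconfiguredInferenceConnector> = {
  // v1 keeps the original rainbow-sprinkles endpoint, so its inferenceId
  // does not follow the ".<model_id>-chat_completion" pattern of v2/v3.
  'General-Purpose-LLM-v1': {
    name: 'General Purpose LLM v1',
    actionTypeId: '.inference',
    exposeConfig: true,
    config: {
      provider: 'elastic',
      taskType: 'chat_completion',
      inferenceId: '.rainbow-sprinkles-elastic',
      providerConfig: { model_id: 'rainbow-sprinkles' },
    },
  },
  'General-Purpose-LLM-v2': {
    name: 'General Purpose LLM v2',
    actionTypeId: '.inference',
    exposeConfig: true,
    config: {
      provider: 'elastic',
      taskType: 'chat_completion',
      inferenceId: '.gp-llm-v2-chat_completion',
      providerConfig: { model_id: 'gp-llm-v2' },
    },
  },
};
```

Note the asymmetry: v1 points at the existing `.rainbow-sprinkles-elastic` inference endpoint, while v2 and v3 follow the `.gp-llm-vN-chat_completion` naming.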
@@ -38,7 +38,7 @@ export const getMessageFromRawResponse = (
   }
 };
 
-const ELASTIC_LLM_CONNECTOR_ID = 'Elastic-Managed-LLM';
+const ELASTIC_LLM_CONNECTOR_IDS = ['Elastic-Managed-LLM', 'General-Purpose-LLM-v1'];

[Comment thread: Samiul-TheSoccerFan marked this conversation as resolved.]

 /**
  * Returns a default connector if there is only one connector
@@ -67,7 +67,7 @@ export const getDefaultConnector = (
   // In case the default connector is not set or is invalid, return the prioritized connector
   const prioritizedConnectors = [...validConnectors].sort((a, b) => {
     const priority = (connector: (typeof validConnectors)[number]) => {
-      if (connector.id === ELASTIC_LLM_CONNECTOR_ID) return 0;
+      if (ELASTIC_LLM_CONNECTOR_IDS.includes(connector.id)) return 0;
       if (
         connector.apiProvider === OpenAiProviderType.OpenAi ||
         connector.apiProvider === OpenAiProviderType.AzureAi
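A standalone sketch of the prioritization in the hunk above (the `Connector` shape and provider strings are simplified assumptions; the real code uses `OpenAiProviderType` and the `validConnectors` element type):

```typescript
// Simplified model of the connector objects; Kibana's real types differ.
interface Connector {
  id: string;
  apiProvider?: string;
}

const ELASTIC_LLM_CONNECTOR_IDS = ['Elastic-Managed-LLM', 'General-Purpose-LLM-v1'];

// Lower value = higher priority: Elastic-managed LLM connectors win,
// then OpenAI/Azure OpenAI connectors, then everything else.
const priority = (connector: Connector): number => {
  if (ELASTIC_LLM_CONNECTOR_IDS.includes(connector.id)) return 0;
  if (connector.apiProvider === 'OpenAI' || connector.apiProvider === 'Azure OpenAI') return 1;
  return 2;
};

// Mirrors the sort in getDefaultConnector: copy the list, order by priority,
// and take the first entry (or undefined for an empty list).
function pickDefaultConnector(connectors: Connector[]): Connector | undefined {
  return [...connectors].sort((a, b) => priority(a) - priority(b))[0];
}
```

Because both the old and the new id map to priority 0, connectors saved under either name sort ahead of third-party LLM connectors.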
@@ -25,7 +25,12 @@ export const request = <T = unknown>({
     ...options,
   });
 };
-export const INTERNAL_INFERENCE_CONNECTORS = ['Elastic-Managed-LLM'];
+export const INTERNAL_INFERENCE_CONNECTORS = [
+  'Elastic-Managed-LLM',
+  'General-Purpose-LLM-v1',
+  'General-Purpose-LLM-v2',
+  'General-Purpose-LLM-v3',
+];
 export const INTERNAL_CLOUD_CONNECTORS = ['Elastic-Cloud-SMTP'];
 
 export const getConnectors = () =>

[Comment thread]

Contributor: Same question: do we need Elastic-Managed-LLM in here and in the following file?

Contributor: Ya, it looks like this PR removes Elastic-Managed-LLM and adds General-Purpose-LLM-v{1,2,3}.

Anyone who has a saved object that references Elastic-Managed-LLM won't have access to it after this change. It will be returned as "not found". I'm not sure what, if anything, might have a reference. I don't think these can be used directly as actions from alerting rules. If they can, then I think this will be a problem.

Or perhaps these connector ids are never referenced directly, just used internally?

Contributor Author (@alvarezmelissa87, Nov 24, 2025): @pmuellr - thanks for taking a look! 🙏
This renames the preconfigured connector from Elastic-Managed-LLM to General-Purpose-LLM-v1; the underlying inference endpoint and model are unchanged. I also updated the places in Kibana that reference the connector id directly to look for both the old and new name. We've updated the naming/connector ids for preconfigured connectors in the past, and it should not present a problem. The connector ids are internal; it's the underlying inference endpoint ids that are used.

The additional preconfigured connectors added (v2, v3) will only be returned by the actions client getAll once the backing inference endpoint exists, so no issue there either.

Contributor: Sounds good. I'd go ahead and try testing an upgrade, if you haven't already, by running

yarn es serverless --license trial --projectType $PROJECT_TYPE

and then running Kibana from main to do something that creates a reference to Elastic-Managed-LLM, then killing Kibana, then running the same from your PR to see if everything survived "migration":

yarn start --no-base-path --serverless $PROJECT_TYPE
@@ -17,7 +17,12 @@ import {
 } from '../../../../common/endpoint/constants';
 
 const INTERNAL_CLOUD_CONNECTORS = ['Elastic-Cloud-SMTP'];
-const INTERNAL_INFERENCE_CONNECTORS = ['Elastic-Managed-LLM'];
+const INTERNAL_INFERENCE_CONNECTORS = [
+  'Elastic-Managed-LLM',
+  'General-Purpose-LLM-v1',
+  'General-Purpose-LLM-v2',
+  'General-Purpose-LLM-v3',
+];
 const INTERNAL_CONNECTORS = [...INTERNAL_CLOUD_CONNECTORS, ...INTERNAL_INFERENCE_CONNECTORS];
 
 export const createBedrockAIConnector = (connectorName?: string) =>