
[SigEvents] Remove custom model selection, migrate index patterns to uiSettings#260741

Merged
achyutjhunjhunwala merged 19 commits into elastic:main from achyutjhunjhunwala:use-model-settings-inside-sig-event on Apr 2, 2026

Conversation

@achyutjhunjhunwala (Contributor) commented Apr 1, 2026

Summary

Cleans up the Significant Events settings page by removing our own per-feature model selection infrastructure and delegating connector resolution to the platform's Inference Feature Registry.

Before: The Settings tab had 3 connector dropdowns (knowledge indicator extraction, query generation, discovery). Selections were stored in a custom saved object (streams-significant-events-settings), and background tasks read that SO to decide which LLM to call.

After: The Settings tab has a link to Stack Management → Model Settings, where admins configure connector overrides centrally (same place as every other inference feature in Kibana). The index patterns field stays on the page but is now stored as a Kibana uiSetting instead of the custom SO. The custom SO is gone entirely.


What changed and why

Settings tab (tab.tsx)

The 3 connector dropdowns, their useLoadConnectors hooks, stale-connector callouts, and no-default callouts are all removed. In their place: a short paragraph and an EuiLink to the Model Settings page, generated via MANAGEMENT_APP_LOCATOR (the same locator pattern used by SLO, Synthetics, etc.).

The index patterns textarea stays. It now reads the initial value from core.uiSettings.get() and saves via core.settings.client.set() — the built-in uiSettings HTTP API handles persistence, no custom route needed.
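As a minimal sketch of that read/save flow (the `UiSettingsClientLike` interface, the helper names, and the comma-separated storage format are illustrative assumptions, not the real Kibana client types):

```typescript
// Illustrative sketch only: UiSettingsClientLike stands in for the real
// core.uiSettings / core.settings.client interfaces, and storing the
// patterns as a comma-separated string is an assumption.
const INDEX_PATTERNS_SETTING = 'observability:streamsSigEventsIndexPatterns';

interface UiSettingsClientLike {
  get(key: string): string;
  set(key: string, value: string): Promise<boolean>;
}

// Read the initial value for the textarea.
function loadIndexPatterns(uiSettings: UiSettingsClientLike): string[] {
  return uiSettings
    .get(INDEX_PATTERNS_SETTING)
    .split(',')
    .map((p) => p.trim())
    .filter(Boolean);
}

// Persist the edited value; the built-in uiSettings HTTP API handles storage.
function saveIndexPatterns(
  uiSettings: UiSettingsClientLike,
  patterns: string[]
): Promise<boolean> {
  return uiSettings.set(INDEX_PATTERNS_SETTING, patterns.join(','));
}
```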

New uiSetting: observability:streamsSigEventsIndexPatterns

  • Registered in feature_flags.ts alongside the other streams flags
  • readonly: true + readonlyMode: 'ui' — hidden from the Advanced Settings page, but the custom settings page can still read and write it programmatically
  • Default value: logs* (same as the old SO default)
  • Per-space (namespace scope), consistent with how other streams settings work
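Putting those properties together, the setting definition looks roughly like this (field values follow the PR description; the `name` string and exact registration shape are assumptions):

```typescript
// Sketch of the uiSetting definition registered in feature_flags.ts.
// The display name and surrounding register() call are assumptions;
// the default, readonly flags, and scope come from the PR description.
const streamsSigEventsIndexPatternsSetting = {
  name: 'Streams significant events index patterns', // assumed label
  value: 'logs*',      // same default as the old saved object
  readonly: true,      // hidden from the Advanced Settings page
  readonlyMode: 'ui',  // but still readable/writable programmatically
  // registered per-space (namespace scope), like other streams settings
};
```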

Task definitions — connector resolution

All 4 tasks (features_identification, significant_events_queries_generation, insights_discovery, onboarding) previously read connectorId* fields from the custom SO and passed them through resolveConnectorIdAndCheckAllowlist. They now call getForFeature directly:

// Fail fast if the inference endpoints plugin is not available.
if (!taskContext.server.searchInferenceEndpoints) {
  throw new StatusError('Inference endpoints plugin is unavailable.', 503);
}
// Let the Inference Feature Registry resolve the connector for this feature.
const { endpoints } = await taskContext.server.searchInferenceEndpoints
  .endpoints.getForFeature(FEATURE_ID, fakeRequest);
if (endpoints.length === 0) {
  throw new StatusError('No connector configured. Configure one in Model Settings.', 400);
}
const connectorId = endpoints[0].connectorId;

getForFeature resolves the connector using this priority order:

  1. Admin override set via the Model Settings page (stored in inference_settings SO)
  2. recommendedEndpoints from our feature registration (register_significant_events_inference_features.ts)
  3. Platform default connector
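That three-tier fallback can be sketched as a pure function (the types and names here are illustrative only; the real logic lives inside the platform's registry):

```typescript
// Illustrative model of getForFeature's priority order, not the actual
// registry implementation.
interface ResolutionInputs {
  adminOverride?: string;         // from the inference_settings SO (Model Settings page)
  recommendedEndpoints: string[]; // from the feature registration
  platformDefault?: string;       // platform default connector
}

function resolveConnector(inputs: ResolutionInputs): string | undefined {
  if (inputs.adminOverride) {
    return inputs.adminOverride; // 1. admin override wins
  }
  if (inputs.recommendedEndpoints.length > 0) {
    return inputs.recommendedEndpoints[0]; // 2. feature recommendation
  }
  return inputs.platformDefault; // 3. fall back to the platform default
}
```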

useIndexPatternsConfig hook

Replaced the API fetch (GET /internal/streams/_significant_events/settings) with a direct core.uiSettings.get() read. Return signature is unchanged so the only consumer (streams_view.tsx) needs no updates.


routes/utils/resolve_connector_id.ts is kept — it is still used by the significant events SSE endpoint and description generation route, which accept a caller-supplied connectorId with a fallback to the default AI connector.


Test updates

Core SO integration tests — two platform-level tests that track all registered SO types were updated to remove streams-significant-events-settings:

  • src/core/server/integration_tests/saved_objects/registration/type_registrations.test.ts
  • src/core/server/integration_tests/ci_checks/saved_objects/check_registered_types.test.ts

Evals suite: configureModelSelectionSettings() in kbn-evals-suite-significant-events previously called PUT /internal/streams/_significant_events/settings (now deleted). Updated to call PUT /internal/search_inference_endpoints/settings instead, the platform admin override API that getForFeature checks first.


Data migration

Users with custom index patterns saved in the old SO will have them reset to logs* after this change. Acceptable given the feature is in technical preview with limited adoption. No migration script — the old orphaned SO data causes no errors (Kibana ignores unknown SO types).


How to test

  1. Go to Significant Events Discovery → Settings tab
  2. Verify the LLM section shows descriptive text and a Go to Model Settings link (opens Stack Management → Model Settings)
  3. Verify the index patterns field is still editable; save and confirm it persists across page reload
  4. In Stack Management → Model Settings, set a connector override for one of the streams sig events features
  5. Trigger a features identification task and check Kibana logs for:
    Using connector <id> for knowledge indicator extraction
    
    (Requires logging.loggers[plugins.streams].level: debug in kibana.dev.yml)
  6. Confirm the task used the connector configured in step 4

New Custom Settings Page

[screenshot: new custom settings page]

@achyutjhunjhunwala achyutjhunjhunwala self-assigned this Apr 1, 2026
@achyutjhunjhunwala achyutjhunjhunwala added release_note:skip Skip the PR/issue when compiling release notes backport:skip This PR does not require backporting Feature:SigEvents Significant events feature, related to streams and rules/alerts (RnA) Team:SigEvents Project team working on Significant Events models:eis/anthropic-claude-4.6-opus Run LLM evals against model: eis/anthropic-claude-4.6-opus models:eis/google-gemini-2.5-flash Run LLM evals against model: eis/google-gemini-2.5-flash models:eis/google-gemini-3.0-flash Run LLM evals against model: eis/google-gemini-3.0-flash models:eis/openai-gpt-5.2 Run LLM evals against model: eis/openai-gpt-5.2 models:eis/openai-gpt-oss-120b Run LLM evals against model: eis/openai-gpt-oss-120b models:eis/anthropic-claude-4.6-sonnet Run LLM evals against model: eis/anthropic-claude-4.6-sonnet models:judge:eis/google-gemini-3.1-pro Override LLM-as-a-judge connector for evals: eis/google-gemini-3.1-pro labels Apr 1, 2026
@achyutjhunjhunwala (Contributor, Author):

/ci

@crespocarlos crespocarlos added evals:significant-events Run the significant-events @kbn/evals and removed models:eis/google-gemini-2.5-flash Run LLM evals against model: eis/google-gemini-2.5-flash labels Apr 1, 2026
@ruflin (Contributor) commented Apr 1, 2026

Did manual testing of the PR, works as expected. When the default model is changed, a different model is used. Otherwise, all the default models are used.

@achyutjhunjhunwala (Contributor, Author):

/ci-ralph

@achyutjhunjhunwala (Contributor, Author):

/ralph fix the build issues and make it green

… projects

- Decrement SAVED_OBJECT_TYPES_COUNT from 155 to 154 after removing
  streams-significant-events-settings SO type
- Update streams_app tsconfig.json: remove unused @kbn/inference-connectors,
  add required @kbn/deeplinks-management
- Regenerate moon.yml to match tsconfig changes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@achyutjhunjhunwala (Contributor, Author):

Fixed the CI build failures from build #420664:

  1. SAVED_OBJECT_TYPES_COUNT (Checks failure): Decremented from 155 to 154 in src/core/packages/saved-objects/server-internal/src/object_types/index.ts to reflect the removal of the streams-significant-events-settings SO type.
  2. tsconfig.json (Quick Checks - ts_projects): Updated streams_app/tsconfig.json — removed unused @kbn/inference-connectors reference, added required @kbn/deeplinks-management (used by the new Management Settings link).
  3. moon.yml (Quick Checks - verify_moon_projects): Regenerated streams_app/moon.yml to match the tsconfig changes.

Verified the saved objects registration integration test passes locally.

Written by cursor, wait for @achyutjhunjhunwala to chime in

@tonyghiani (Contributor) left a comment:

LGTM 👌

@achyutjhunjhunwala achyutjhunjhunwala marked this pull request as ready for review April 2, 2026 08:41
@achyutjhunjhunwala achyutjhunjhunwala requested review from a team as code owners April 2, 2026 08:41
@macroscopeapp (Bot) commented Apr 2, 2026

Approvability

Verdict: Needs human review

This PR is a significant architectural change that removes a saved object type and migrates connector/model selection to use the centralized Inference Feature Registry. While mostly deletions, it changes runtime behavior for connector resolution and removes settings API endpoints. The author does not own any of the changed files, which are owned by multiple teams including obs-sig-events-team and kibana-core.


@achyutjhunjhunwala achyutjhunjhunwala enabled auto-merge (squash) April 2, 2026 09:58
@achyutjhunjhunwala (Contributor, Author):

/ralph check the test failures of build 421192 and fix them

Build URL: https://buildkite.com/elastic/kibana-pull-request/builds/421192

Use ci/bk-build-info.sh 421192 to fetch detailed build failure information.

@achyutjhunjhunwala (Contributor, Author):

Investigated the test failures in build #421192. All 3 failures are not caused by this PR:

  1. LLM Evals (2 steps — eis-anthropic-claude-4-6-sonnet and eis-openai-gpt-oss-120b): Pre-existing bug in kbn-scout-reporting — the getKibanaModulePath function in src/platform/packages/private/kbn-scout-reporting/src/helpers/read_manifest.ts requires a scout or scout_* directory in the playwright config path, but evals packages place playwright.config.ts at the package root. This was introduced by commit 8741ad7e6140 merged to main and affects all evals suites, not just significant-events.

  2. FTR Configs Issue #60 #63: Unrelated flaky UI test ("search features index details page").

Previous builds #421000 and #420993 passed (with flaky retries), confirming these are not PR-related. No action needed in this PR.

Written by Ralph, wait for @achyutjhunjhunwala to chime in

Comment thread x-pack/platform/plugins/shared/streams/server/feature_flags.ts

@leathekd left a comment:
Schema LGTM

@mattkime (Contributor) left a comment:
codeowner changes lgtm

@achyutjhunjhunwala achyutjhunjhunwala merged commit be7d65d into elastic:main Apr 2, 2026
21 checks passed
@elasticmachine (Contributor):

💛 Build succeeded, but was flaky

Failed CI Steps

Metrics [docs]

Module Count

Fewer modules leads to a faster build time

| id | before | after | diff |
| --- | --- | --- | --- |
| streamsApp | 1782 | 1739 | -43 |

Public APIs missing comments

Total count of every public API that lacks a comment. Target amount is 0. Run node scripts/build_api_docs --plugin [yourplugin] --stats comments for more detailed information.

| id | before | after | diff |
| --- | --- | --- | --- |
| @kbn/management-settings-ids | 148 | 149 | +1 |

Async chunks

Total size of all lazy-loaded chunks that will be downloaded as the user navigates the app

| id | before | after | diff |
| --- | --- | --- | --- |
| streamsApp | 1.9MB | 1.8MB | -11.5KB |

Public APIs missing exports

Total count of every type that is part of your API that should be exported but is not. This will cause broken links in the API documentation system. Target amount is 0. Run node scripts/build_api_docs --plugin [yourplugin] --stats exports for more detailed information.

| id | before | after | diff |
| --- | --- | --- | --- |
| streams | 30 | 29 | -1 |

Unknown metric groups

API count

| id | before | after | diff |
| --- | --- | --- | --- |
| @kbn/management-settings-ids | 149 | 150 | +1 |

History

cc @achyutjhunjhunwala

@achyutjhunjhunwala achyutjhunjhunwala deleted the use-model-settings-inside-sig-event branch April 2, 2026 14:48
flash1293 added a commit to flash1293/kibana that referenced this pull request Apr 2, 2026
Resolve merge conflicts from upstream PR elastic#260741 which removed
model_settings_config saved objects. Migrate the useMemory setting
to a uiSetting (OBSERVABILITY_STREAMS_ENABLE_MEMORY), aligning
with upstream's pattern. Replace all modelSettingsClient.getSettings()
calls with uiSettingsClient.get() and remove connectorIdDiscovery
references (now resolved via default AI connector).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@flash1293 flash1293 mentioned this pull request Apr 2, 2026
18 tasks

Labels

backport:skip This PR does not require backporting evals:significant-events Run the significant-events @kbn/evals Feature:SigEvents Significant events feature, related to streams and rules/alerts (RnA) models:eis/anthropic-claude-4.6-opus Run LLM evals against model: eis/anthropic-claude-4.6-opus models:eis/anthropic-claude-4.6-sonnet Run LLM evals against model: eis/anthropic-claude-4.6-sonnet models:eis/google-gemini-3.0-flash Run LLM evals against model: eis/google-gemini-3.0-flash models:eis/openai-gpt-5.2 Run LLM evals against model: eis/openai-gpt-5.2 models:eis/openai-gpt-oss-120b Run LLM evals against model: eis/openai-gpt-oss-120b models:judge:eis/google-gemini-3.1-pro Override LLM-as-a-judge connector for evals: eis/google-gemini-3.1-pro release_note:skip Skip the PR/issue when compiling release notes Team:SigEvents Project team working on Significant Events v9.4.0

