[SigEvents] Fix model order issue [BLOCKED] #264050
achyutjhunjhunwala wants to merge 13 commits into elastic:main from
Conversation
Macroscope approvability verdict: Needs human review. The PR is explicitly marked as '[BLOCKED]' in the title, indicating it's not ready for merge. Additionally, it changes connector resolution behavior affecting model selection across multiple features, and the author does not own any of the changed files, which are all owned by @elastic/obs-onboarding-team or @elastic/obs-sig-events-team.
ruflin
left a comment
Did a manual test and works as expected.
/ci-ralph can you get me a green build. In case the build failures you see are not because of my changes, please merge upstream to retrigger the build.
/ralph check the test failures of build 430853 and fix them. Build URL: https://buildkite.com/elastic/kibana-pull-request/builds/430853. Use
Increase lens bundle size limit from 86000 to 86500 to fix build failure caused by the merge with main (current size: 86083).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Build Fix: Lens Bundle Size Limit
Build #430853 failed because the lens plugin page load bundle size (86083 bytes) exceeded the configured limit (86000 bytes). This overage was introduced by the merge with `main`, not by the PR's own changes. Fix: updated `packages/kbn-optimizer/limits.yml` to increase the lens bundle size limit from 86000 → 86500. Commit: f4b466f
🤖 Generated with Claude Code
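For reference, the change is a one-line bump in the optimizer limits file. A sketch of the resulting entry (the surrounding structure of `limits.yml` is abbreviated here, and the `pageLoadAssetSize` key is assumed from Kibana's optimizer config, not quoted from this PR):

```yaml
# packages/kbn-optimizer/limits.yml (excerpt; structure abbreviated)
pageLoadAssetSize:
  lens: 86500  # raised from 86000; measured size after merging main was 86083
```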
…ssue

Resolves conflicts from main's PR elastic#263822 (KiGenerationProvider refactor):

- `packages/kbn-optimizer/limits.yml`: took main's `lens: 90000` (higher ceiling).
- `use_connector_config.ts`: accepted main's deletion (state moved into `KiGenerationProvider`).
- `use_inference_feature_connectors.ts`: kept our hook shape (`connectors` + `resolvedConnectorId`) with `resolvedConnectorId = aiConnectors[0]?.id` so the admin's global default wins over feature `recommendedEndpoints` when set, matching the alignment with the inference team.
- `streams_view.tsx`: took main's `useKiGeneration()` destructure, dropped the `useAIFeatures`/`allConnectors` coupling, and derived `connectorError` / `isConnectorCatalogUnavailable` from the three per-step hooks.
- `knowledge_indicators_table.tsx`: same per-step derivation; passes `featuresConnectors`/`queriesConnectors` (not `allConnectors`) into `GenerateSplitButton`.
- `ki_generation_context.tsx`: extended `ConnectorState` to expose `connectors` and `error` from each per-step hook.

Type check passes on both streams_app and streams projects.
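The `connectorError` / `isConnectorCatalogUnavailable` derivation mentioned in this commit can be sketched as follows (the hook-result shape and the "every step failed" rule are assumptions for illustration, not the actual code):

```typescript
// Assumed shape of a per-step hook result; the real hook returns more fields.
interface StepConnectorState {
  connectors: Array<{ id: string }>;
  error?: Error;
}

// Sketch of the per-step derivation: the view-level error flags come from
// the three step hooks rather than a shared allConnectors list.
function deriveConnectorState(steps: StepConnectorState[]) {
  // Surface the first per-step error, if any.
  const connectorError = steps.find((s) => s.error)?.error;
  // Assumed rule: the catalog counts as unavailable only when every step
  // either errored or came back with no connectors at all.
  const isConnectorCatalogUnavailable =
    steps.length > 0 &&
    steps.every((s) => s.error !== undefined || s.connectors.length === 0);
  return { connectorError, isConnectorCatalogUnavailable };
}
```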
Moving this PR back to Draft as there is some work which needs to be done.
💛 Build succeeded, but was flaky
DO NOT MERGE THE PR
Summary
closes -
The sig-events per-step model dropdowns (KI Extraction / KI Queries / Discovery) were sourcing options from the wrong inference feature and pre-selecting the global GenAI default instead of the step's first recommended model. This PR scopes each split button to its own feature and — after review — stops hand-rolling any ordering or selection client-side, trusting the backend as the single source of truth.
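To make the original mis-selection concrete, here is a minimal, illustrative sketch (names and shapes are invented for this example, not the actual Kibana code) of how a client-side prepend of the global default demotes a step's recommended models in the list:

```typescript
interface Connector {
  id: string;
  isRecommended?: boolean;
}

// Illustrative: what a client-side prepend of the global default does.
// With a default set, it lands at index 0 and the step's recommended
// connectors are pushed down the list.
function prependGlobalDefault(connectors: Connector[], defaultId?: string): Connector[] {
  if (!defaultId) return connectors;
  const def = connectors.find((c) => c.id === defaultId);
  if (!def) return connectors;
  return [def, ...connectors.filter((c) => c.id !== defaultId)];
}
```

Since the dropdown badge and pre-selection follow slot 0, any such client-side reordering decides the model, which is exactly what this PR stops doing in favor of the backend's ordering.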
How we got here
- `useGenAIConnectors.useLoadConnectors` started prepending `GEN_AI_SETTINGS_DEFAULT_AI_CONNECTOR`, pushing feature-recommended endpoints down.
- `useGenAIConnectors` was pointed to `featureId: 'streams'` — a parent with `recommendedEndpoints: []`, so sig-events recs never surfaced.
- An `isRecommended` flag was added, and the HTTP route was made to prepend the global default.

Fix
Per-step feature scoping (core fix)
- Each split button now sources options from its own inference feature (`STREAMS_SIG_EVENTS_KI_EXTRACTION_...`, `..._KI_QUERY_GENERATION_...`, `..._DISCOVERY_...`) via `useInferenceFeatureConnectors(featureId)`.
- `streams_view.tsx` wires the three per-step lists into `GenerateSplitButton`/`InsightsSplitButton`; the old shared `allConnectors` prop is retired.
- The `toInferenceConnector` adapter is extracted into `public/hooks/to_inference_connector.ts` so the GenAI parent hook and the per-feature sig-events hook agree on one mapping.

Trust the backend for ordering (no more client-side magic)
- `use_inference_feature_connectors.ts` no longer sorts by `isRecommended` and no longer hand-picks a "first recommended" value. It renders the backend response verbatim and sets `resolvedConnectorId = aiConnectors[0]?.id`.
- The backend already orders the list as: admin SO override → global default (if set) → `recommendedEndpoints` → rest of catalog, so our dropdown correctly shows the admin's global default on top when set, and the step's recommendeds on top when not.
- `DEFAULT_ONLY=true` handling also falls out for free — the HTTP route collapses the response to just the default and we render it as-is.

Scenarios
Running example for the rows below: KI Extraction. Registry recommendeds = `[GPT-OSS-120B, GPT-5.2, Sonnet 4.6]`. Other connectors referenced: `Claude Haiku`, `Gemini 2.5 Pro`, `GPT-4.1`.

1. Global default set (e.g. `Claude Haiku`). Dropdown was `[GPT-OSS-120B, GPT-5.2, Sonnet 4.6, Claude Haiku, …rest]` with badge on `GPT-OSS-120B`; runner picked `GPT-OSS-120B`. Expected: `[Claude Haiku, GPT-OSS-120B, GPT-5.2, Sonnet 4.6, …rest]`, badge on `Claude Haiku`; runner picks `Claude Haiku`.
2. Global default set (`Sonnet 4.6`, registry rec #3). Dropdown was `[GPT-OSS-120B, GPT-5.2, Sonnet 4.6, …rest]` with badge on `GPT-OSS-120B`; runner picked `GPT-OSS-120B`. Expected: `[Sonnet 4.6, GPT-OSS-120B, GPT-5.2, …rest]` (`Sonnet 4.6` hoisted, deduped), badge on `Sonnet 4.6`; runner picks `Sonnet 4.6`.
3. No global default. Dropdown: `[GPT-OSS-120B, GPT-5.2, Sonnet 4.6, …rest]`, badge on `GPT-OSS-120B`; runner picks `GPT-OSS-120B`.
4. Admin SO override (`[Gemini 2.5 Pro, GPT-4.1]`). Dropdown: `[Gemini 2.5 Pro, GPT-4.1]`, badge on `Gemini 2.5 Pro`; runner picks `Gemini 2.5 Pro`.
5. `DEFAULT_ONLY=true`, default set (e.g. `Claude Haiku`). Dropdown: `[Claude Haiku]` only. Runner today picks `GPT-OSS-120B` (recommended #1; ignores "Disallow"); expected: `Claude Haiku`, throwing when the default is unset.
6. Default `Claude Haiku` and SO override `[Gemini 2.5 Pro, GPT-4.1]`. Dropdown: `[Gemini 2.5 Pro, GPT-4.1]`, badge on `Gemini 2.5 Pro`; runner picks `Gemini 2.5 Pro`.

Dropdown rows 1, 2, 5 are made correct by this PR (client sort removed, backend ordering trusted). Task-runner rows 1, 2, 5 will be made correct by the inference-plugin PR once it merges.
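As a sanity check on the scenarios above, the backend ordering they rely on can be sketched and exercised directly. This is a reconstruction from the description (admin SO override → global default → recommendeds → rest, with dedup), not the actual Kibana route code:

```typescript
interface Connector {
  id: string;
}

// Sketch of the ordering the frontend now trusts. An SO override replaces
// the list entirely; otherwise the global default is hoisted first, then
// the step's recommendeds, then the rest of the catalog, deduplicated.
function orderConnectors(
  catalog: Connector[],
  recommendedIds: string[],
  globalDefaultId?: string,
  soOverrideIds?: string[]
): Connector[] {
  const byId = new Map(catalog.map((c) => [c.id, c]));
  if (soOverrideIds) {
    // Admin SO override short-circuits everything else (scenarios 4 and 6).
    return soOverrideIds.flatMap((id) => byId.get(id) ?? []);
  }
  const ordered: Connector[] = [];
  const seen = new Set<string>();
  const push = (id?: string) => {
    const c = id !== undefined ? byId.get(id) : undefined;
    if (c && !seen.has(c.id)) {
      seen.add(c.id);
      ordered.push(c);
    }
  };
  push(globalDefaultId); // global default first, if set
  recommendedIds.forEach(push); // then the step's recommendeds
  catalog.forEach((c) => push(c.id)); // then the rest of the catalog
  return ordered;
}
```

The `seen` set is what produces the "hoisted, deduped" behavior in scenario 2, and `resolvedConnectorId` falls out as the first element of the returned list.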
How to test
- Run `yarn start` (restart if Kibana was already running on this branch).

Screenshot
Was BLOCKED by this PR
The PR was developed with Claude Code — claude-opus-4-7