[Observability AI Assistant] Screen Context #175885
Conversation
Force-pushed b3e75ca to d05bfce
```ts
const client = await service.getClient({ request });

client.setChatContext(params.body.chatContext);
```
I don't think we should do this on the server (having side-effects). I also don't think we need it for chat, just /chat/complete. Let's make it a property of client.complete({ chatContext, ... })
- Could you elaborate on why this doesn't need to happen on the server when other functions do, from a conceptual standpoint? What is it about this that means it doesn't need to happen on the server?
- How would that work when the context function now retrieves both knowledge base entries and chat context? The KB entry retrieval happens on the server, so how should it then get data from the client here?
Re: also adding it to /chat: wouldn't having that also unlock augmenting the contextual insights with context?
> Could you elaborate why this doesn't need to happen on the server when other functions do, from a conceptual standpoint? What is it about this that makes it doesn't need to happen on the server?
> How would that work when the context function now retrieves both knowledge base entries and chat context? The kb entry getting happens on the server, so how should it then get stuff from the client here?
Sorry, I realize my wording is ambiguous. What I mean is that I don't think we should have side-effects on the server. The client is unique per request, but I still think it is unexpected for it to have side-effects like this. Maybe side-effects is the wrong term anyway, but what I mean is that it should be a property of /chat/complete. E.g. we don't use setMessages either.
> Re: also adding it to /chat: wouldn't having that also unlock augmenting the contextual insights with context?
The contextual insights use /chat/complete.
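To make the suggested shape concrete, here is a minimal sketch of passing the chat context as a plain per-call argument to `complete()` instead of storing it on the client via a setter. All names and shapes here (`ChatContext`, `CompleteParams`, the prompt assembly) are illustrative assumptions, not the actual Kibana API:

```ts
// Hypothetical shapes, for illustration only.
interface ChatContext {
  screenDescription: string;
}

interface CompleteParams {
  connectorId: string;
  messages: string[];
  chatContext?: ChatContext;
}

// The context is consumed per call; nothing is stored on the client,
// so there is no server-side side-effect to reason about.
function complete({ connectorId, messages, chatContext }: CompleteParams) {
  const contextPrefix = chatContext ? `Screen: ${chatContext.screenDescription}\n` : '';
  return { connectorId, prompt: contextPrefix + messages.join('\n') };
}

const result = complete({
  connectorId: 'azure-open-ai',
  messages: ['Why is this service slow?'],
  chatContext: { screenDescription: 'APM service overview' },
});
```

The point of the design is that two calls with different contexts cannot interfere, because the context never outlives a single request.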
dgieselaar left a comment:
I've not finished going through all of this, so consider this early feedback - that I expect will amount to some significant changes.
```ts
  initialMessages: Message[];
  connectorId: string;
}) {
  const service = useObservabilityAIAssistant();
```
why the plugin service, and not the chat service?
Because the chat service is not available until after the chat has started.
...k/plugins/observability_ai_assistant/public/components/action_menu_item/action_menu_item.tsx
```ts
import { ObservabilityAIAssistantService } from '../types';

export function useObservabilityAIAssistantChatContext(service: ObservabilityAIAssistantService) {
```
I think we need to run a function here on unmount to remove the context set by the consuming component.
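One common way to do this is to have the registration function return a teardown callback, which the consuming component can return from a `useEffect` so React runs it on unmount. A runnable sketch with made-up names (`registeredContexts`, `setChatContext`), not the actual plugin code:

```ts
type ChatContext = { description: string };

const registeredContexts: ChatContext[] = [];

function setChatContext(context: ChatContext): () => void {
  registeredContexts.push(context);
  // The returned teardown removes exactly this entry; a component can
  // return it from useEffect so the context is cleared on unmount.
  return () => {
    const index = registeredContexts.indexOf(context);
    if (index !== -1) registeredContexts.splice(index, 1);
  };
}

const unregister = setChatContext({ description: 'The user is looking at the SLO list' });
unregister(); // simulates the component unmounting
```

In a component this would look like `useEffect(() => setChatContext(...), [deps])`, since the effect's return value is its cleanup.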
```ts
    shareReplay()
  );
},
complete({ chatContext, connectorId, conversationId, messages, persist, signal }) {
```
I assume this is just re-sorting, but it makes it harder to see the diff
```ts
getLicense: () => licenseStart.license$,
getLicenseManagementLocator: () => shareStart,
setChatContext: (newChatContext: ChatContext) => {
  chatContext = {
```
as discussed elsewhere, this just keeps adding context indefinitely
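One way to avoid unbounded growth is to key each context by a stable name, so that repeated calls for the same screen replace the previous entry instead of appending to it. A sketch under those assumptions (the `Map`-based registry and field names are made up, not the actual fix):

```ts
type ChatContext = { name: string; description: string };

const contexts = new Map<string, ChatContext>();

function setChatContext(next: ChatContext): void {
  // Map.set overwrites any earlier entry with the same key, so repeated
  // renders of the same screen do not accumulate stale copies.
  contexts.set(next.name, next);
}

setChatContext({ name: 'slo_list', description: 'first render' });
setChatContext({ name: 'slo_list', description: 'second render' });
```

The registry stays bounded by the number of distinct context names rather than the number of calls.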
I'm surprised 20 files is too much to go through in one go. Wouldn't it make sense to finish it and get a complete picture? In any case, I replied to one of your remarks that I think is valuable here. FWIW, I also wanted to have it on the client and not the service, but the client is only available after the chat is started. That would mean consumers would have to pass the client in to the hook, which isn't great either. Let me know your thoughts.
@CoenWarmer I have other things to do :) I'm doing this to give you quicker feedback so you'll be able to work on it.
Pinging @elastic/obs-ux-management-team (Team:obs-ux-management)
Pinging @elastic/obs-ux-infra_services-team (Team:obs-ux-infra_services)
x-pack/plugins/observability_ai_assistant/server/functions/context.ts
Thanks for your work so far @CoenWarmer! Discussed with @sorenlouv over Zoom, I plan on making a couple of changes:
```ts
const setApplicationContext =
  useApmPluginContext().observabilityAIAssistant.service
    .setApplicationContext;
```
Nit: with destructuring we might be able to get a one-liner.
Suggested change:
```diff
- const setApplicationContext =
-   useApmPluginContext().observabilityAIAssistant.service
-     .setApplicationContext;
+ const { setApplicationContext } = useApmPluginContext().observabilityAIAssistant.service;
```
x-pack/plugins/apm/public/components/app/service_inventory/index.tsx
```ts
  });
}

return setApplicationContext({
```
Can we call this chatContext? It will be helpful for consumers to highlight that this is context for the AI Assistant - and not application context in general.
Suggested change:
```diff
- return setApplicationContext({
+ return setChatContext({
```
Also: I think we need to consider the different types of "context" and how we refer to them. Afaict we have:
- chat context supplied by consumers
- knowledge base articles
- data from connectors (third parties)
Is all of it context? Is all of it knowledge base?
It was chatContext originally, but I'm thinking there are use cases without chat, e.g. on a one-off interaction with the LLM, like contextual insights, or generating an ES|QL query. I agree that applicationContext is not very clear. Maybe setAssistantContext?
setAssistantContext is definitely better.
x-pack/plugins/observability/public/pages/slo_details/slo_details.tsx
x-pack/plugins/observability/public/pages/slos/components/slo_list.tsx
...k/plugins/observability_ai_assistant/public/components/action_menu_item/action_menu_item.tsx
```diff
- name: 'recall',
- arguments: '{"queries":[],"categories":[]}',
+ name: 'context',
+ arguments: '{"learnings": { "queries":[],"categories":[]} }',
```
Why the nesting under learnings?
oh, this is a mistake. will correct
```ts
const screenDescription = appContexts.map((context) => context.description).join('\n\n');

const nonEmptyQueries = compact(queries);
const content = { screen_description: screenDescription, learnings: [] };
```
What about moving the dataWithinTokenLimit to the getContext function?
Suggested change:
```diff
- const content = { screen_description: screenDescription, learnings: [] };
+ // any data that falls within the token limit, send it automatically
+ const dataWithinTokenLimit = compact(appContexts.flatMap((context) => context.data)).filter(
+   (data) => encode(JSON.stringify(data.value)).length <= MAX_TOKEN_COUNT_FOR_DATA_ON_SCREEN
+ );
+ const content = {
+   screen_description: screenDescription,
+   data_on_screen: dataWithinTokenLimit,
+   learnings: [],
+ };
```
```ts
// any data that falls within the token limit, send it automatically
const dataWithinTokenLimit = compact(
  appContexts.flatMap((context) => context.data)
).filter(
  (data) =>
    encode(JSON.stringify(data.value)).length <= MAX_TOKEN_COUNT_FOR_DATA_ON_SCREEN
);
```
Why is this not part of the getContext function?
It wasn't part of it because I originally implemented it as separate messages (get_data_on_screen function calls). But I was worried that that would cause the LLM to no longer call get_data_on_screen, so I removed it again. I've moved it to the getContext function.
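The token-limit filter being discussed can be sketched in a self-contained way. The real code uses a GPT tokenizer's `encode()`; here a rough 4-characters-per-token estimate stands in, and the budget value and names are assumptions so the example runs on its own:

```ts
const MAX_TOKEN_COUNT_FOR_DATA_ON_SCREEN = 1000; // assumed budget, not the real constant

// Rough stand-in for a tokenizer's encode(): ~4 characters per token.
function approximateTokenCount(text: string): number {
  return Math.ceil(text.length / 4);
}

interface ScreenData {
  name: string;
  value: unknown;
}

// Only data that fits the token budget is sent automatically; anything
// larger stays behind get_data_on_screen for the LLM to request.
function filterDataWithinTokenLimit(data: ScreenData[]): ScreenData[] {
  return data.filter(
    (item) =>
      approximateTokenCount(JSON.stringify(item.value)) <= MAX_TOKEN_COUNT_FOR_DATA_ON_SCREEN
  );
}

const small: ScreenData = { name: 'slo_name', value: 'checkout-latency' };
const large: ScreenData = { name: 'all_documents', value: 'x'.repeat(10_000) };
const sent = filterDataWithinTokenLimit([small, large]);
```

The filter decides per item, so one oversized entry doesn't block the smaller ones from being sent automatically.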
x-pack/plugins/observability_ai_assistant/server/functions/query/index.ts
```diff
  });

- const response$ = await client.complete({
+ const response$ = client.complete({
```
Thanks! This dangling await has been triggering severe OCD!
😄 @miltonhultgren was complaining about it as well
```ts
if (allData.length) {
  this.registerFunction(
    {
      name: 'get_data_on_screen',
```
I want to make sure I understand this correctly: the LLM can now get the chat context in two ways:

- calling the `context` function, which returns the chat context (possibly truncated due to the token limit), or
- calling the `get_data_on_screen` function, which returns the untruncated chat context
Btw. it looks like `get_data_on_screen` doesn't include `appContexts.description` (aka "screenDescription"). Is that intentional? Afaict the context for the SLOList only has a screen description and no data attribute, so that would result in no context being provided here.
Yes, the descriptions are automatically added in the context function call, so there is no need for the LLM to request it.
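A minimal sketch of how those two paths fit together, with made-up shapes and helper names (not the actual plugin code): the `context` function always bundles the screen descriptions, while `get_data_on_screen` serves the untruncated data only when asked:

```ts
interface AppContext {
  description: string;
  data?: Array<{ name: string; value: unknown }>;
}

const appContexts: AppContext[] = [
  { description: 'The user is looking at the SLO list' },
  {
    description: 'The user is looking at an APM service',
    data: [{ name: 'top_transactions', value: ['tx-1', 'tx-2'] }],
  },
];

// context(): descriptions are included automatically, so the LLM never
// needs to request them.
function getContext() {
  return { screen_description: appContexts.map((c) => c.description).join('\n\n') };
}

// get_data_on_screen(): returns untruncated data for the requested names.
function getDataOnScreen(names: string[]) {
  return appContexts.flatMap((c) => c.data ?? []).filter((d) => names.includes(d.name));
}
```

Under this reading, a context with only a description (like the SLO list) still reaches the LLM via `getContext()` even though `getDataOnScreen()` has nothing to return for it.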
```ts
`);

return setApplicationContext({
  description,
```
nit: should we call the top-level description `screenDescription`, to indicate that it should describe what the user is looking at, whereas the description under data describes what the data entities are?
Suggested change:
```diff
- description,
+ screenDescription,
```
sorenlouv left a comment:
Awesome improvement! Excited to see the impact of this!
💛 Build succeeded, but was flaky
```ts
${
  alertDetail.formatted.reason
    ? `The reason given for the alert is ${alertDetail.formatted.reason}.`
```
Not needed, everything we send to the LLM needs to be in English.
```ts
  return;
}

const screenDescription = dedent(`The user is looking at an ${
```
@shahzad31 We always use English when communicating with the LLM (think of this as an API call where you wouldn't translate the params either)
```ts
The user is looking at the detail page for the following SLO

Name: ${slo.name}.
Id: ${slo.id}
```
I guess we also need to add slo.instanceId if it's not equal to *
it's available in the metadata below
What metadata are you referring to?
I wonder if using the actual doc fields slo.name, slo.id etc, would help the LLM be able to match the data to parameters for functions when those parameters use the field name.
the data prop - that's also sent over to the LLM
re: your second question: yes, that's useful to the LLM, but also available in the data prop
@dominiqueclarke sorry, I spoke too soon, I have not actually verified whether field names are available in the metadata. Where are the field names used (outside of SOs)?
dominiqueclarke left a comment:
LGTM, just wondering about the instanceId value as Shahzad mentioned, but the SLO team can always refine the data provided on the pages we own.
## Summary

This adds functionality to allow consumers of the AI Assistant for Observability to add context to the LLM conversation, at the start of a conversation and contextually after every prompt.

https://github.com/elastic/kibana/assets/535564/b4d62897-d701-4c23-b90b-464cad21e9d0

## How to use

The service now exposes a `setApplicationContext` function that returns a callback to unregister the context. Consumers can use this to add context relevant to the route or settings of Kibana that might be relevant for the LLM within that conversation. Example:

```ts
useEffect(() => {
  return setApplicationContext({
    data: [
      {
        name: 'top_transactions',
        description: 'The visible transaction groups',
        value: mainStatistics.transactionGroups.map((group) => {
          return {
            name: group.name,
            alertsCount: group.alertsCount,
          };
        }),
      },
    ],
  });
}, [setApplicationContext, mainStatistics]);
```

By default, the URL that the user is currently on is always included in the context, so the Assistant will always take that into account.

## Details for reviewers

- The `recall` function has been renamed to `context`.
- The `context` function now returns both knowledge base entries and the chat context set by `setApplicationContext` (elastic#176357).
- Part of the function logic was moved from the ObservabilityAIAssistantService into the ChatFunctionClient, for easier testing.

Co-authored-by: Dario Gieselaar <dario.gieselaar@elastic.co>
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>