[Observability AI Assistant] Update README.md#163769
Conversation
> Conversations with the AI Assistant are powered by three foundational components: the LLM (currently only OpenAI flavors), the knowledge base, and function calling.
> The LLM essentially sits between the product and the user. Its purpose is to interpret both the messages from the user and the response from the functions called, and offer its conclusions and suggest next steps. It can suggest functions on its own, and it has read and write access to the knowledge base.
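The loop this describes (user → LLM → function → LLM → user) can be sketched roughly as follows. This is purely illustrative and not the plugin's actual API: `callLlm`, `functionRegistry`, the stubbed LLM behaviour, and the message shape are all assumptions.

```typescript
type Message = {
  role: 'user' | 'assistant' | 'function';
  content: string;
  functionName?: string;
};

type LlmResponse =
  | { type: 'answer'; content: string }
  | { type: 'functionCall'; name: string; args: Record<string, unknown> };

// Hypothetical registry of functions the LLM is allowed to request.
const functionRegistry: Record<string, (args: Record<string, unknown>) => string> = {
  get_alerts: () => JSON.stringify([{ id: 'alert-1', status: 'active' }]),
};

// Stubbed LLM: requests a function on the first turn, then interprets
// the raw function response on the next turn.
function callLlm(history: Message[]): LlmResponse {
  const last = history[history.length - 1];
  if (last.role === 'function') {
    const items = JSON.parse(last.content) as unknown[];
    return { type: 'answer', content: `Found ${items.length} active alert(s).` };
  }
  return { type: 'functionCall', name: 'get_alerts', args: {} };
}

function converse(userMessage: string): Message[] {
  const history: Message[] = [{ role: 'user', content: userMessage }];
  for (let turn = 0; turn < 5; turn++) {
    const response = callLlm(history);
    if (response.type === 'answer') {
      history.push({ role: 'assistant', content: response.content });
      return history;
    }
    // Execute the requested function and hand the raw result back to the
    // LLM for interpretation; the raw payload also stays in the history.
    const raw = functionRegistry[response.name](response.args);
    history.push({ role: 'function', functionName: response.name, content: raw });
  }
  return history;
}
```

Note that the raw function response is retained in the history rather than only the LLM's interpretation of it, which is what makes later inspection possible.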
One somewhat random comment on "The LLM essentially sits between the product and the user": a few days ago I was using GPT-4 to interpret the output of Linux CLI commands. The use case I was experimenting with is akin to having a bash shell in which, every time you execute a command, you get the output enriched with analysis/commentary from the LLM. At one point I executed an `ls` command in a directory and then tried to execute a program that was in that directory according to the output of `ls`, but I couldn't. The executable wasn't there any more. I then lost 30 mins thinking I was going insane and had somehow deleted the executable, or run the `ls` command on a different host via ssh without realising, or something.

Anyway, long story short: I was the victim of the LLM hallucinating. Instead of just adding commentary when presenting me with the output of `ls`, it was totally rewriting the output and making up files/directories.

The reason I mention this is that we need to make sure we protect users from this happening. If the LLM is sitting in the middle, we need to make sure that any time we're presenting the output of a function to the user we minimize the chances of a hallucination taking place (e.g. by including instructions like "produce the output verbatim" in the prompt), and we also need to ensure that users have a way to get to the raw command output if they need to, so they can at least check whether a hallucination has taken place.
agreed, do you think allowing the user to inspect the function response (which is directly from the "system", and only then sent over to the LLM for interpretation) covers that?
So I think we want the default mode of operation to be minimal friction, and I don't think it's feasible/desirable to have the user eyeballing the data going back and forth. I think the solution is likely:
- Ensure we are clear in our prompts about how the LLM is supposed to handle the input data, and about expectations around echoing content verbatim if that's what we want.
- Instead of having the user sit in the middle and eyeball data each time it's sent back and forth, give the user the ability to see the raw data, function calls etc. after the fact if they want to, e.g. via a 'More info' dropdown on the chat message it relates to, or something similar.
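The first point could amount to spelling the expectation out in the system prompt. A minimal sketch; the wording and the `buildSystemPrompt` helper are assumptions, not anything that exists in the plugin:

```typescript
// Illustrative only: build a system prompt that tells the LLM how to
// treat function output, including the "echo verbatim" expectation.
function buildSystemPrompt(functionNames: string[]): string {
  return [
    'You are an assistant for Elastic Observability.',
    `You may call these functions: ${functionNames.join(', ')}.`,
    'When you present the output of a function to the user, reproduce the',
    'data verbatim; do not invent, omit, or reorder entries. Add your own',
    'analysis as separate commentary after the raw output.',
  ].join('\n');
}
```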
sgtm. The prompt probably needs some tweaking, but I do think we take care of the other stuff, e.g. the function response is collapsed by default but the user can expand and inspect it if they want to.
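The collapsed-by-default behaviour described here can be modelled by keeping the untouched function payload on the chat message itself, so that expanding only reveals stored data and never re-interprets it. A sketch under assumed field names, not the plugin's actual message schema:

```typescript
interface ChatMessage {
  role: 'assistant' | 'function';
  functionName?: string;
  content: string; // summary shown by default
  raw?: string;    // untouched function payload, shown only on expand
  collapsed: boolean;
}

// Wrap a raw function result so the UI renders it collapsed by default.
function functionResultMessage(
  functionName: string,
  rawPayload: string,
  summary: string
): ChatMessage {
  return { role: 'function', functionName, content: summary, raw: rawPayload, collapsed: true };
}

// Expanding never touches the payload; it only flips the display state.
function expand(message: ChatMessage): ChatMessage {
  return { ...message, collapsed: false };
}
```

Because `raw` is captured before the LLM sees the payload, the user can always compare the LLM's interpretation against the original data.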
In x-pack/plugins/observability_ai_assistant/README.md
<#163769 (comment)>:
> +
+The user can then also continue the conversation in a flyout by clicking "Start chat".
+
+- **1.2.2. Action Menu Button**
+
+All Observability apps also have a button in the top action menu, to open the AI Assistant and start a conversation.
+
+- **1.2.3. Standalone page**
+
+Users can also access existing conversations and create a new one by navigating to `/app/observabilityAIAssistant/conversations/new`. They can also find this link in the search bar.
+
+#### **1.3. Chat**
+
+Conversations with the AI Assistant are powered by three foundational components: the LLM (currently only OpenAI flavors), the knowledge base, and function calling.
+
+The LLM essentially sits between the product and the user. Its purpose is to interpret both the messages from the user and the response from the functions called, and offer its conclusions and suggest next steps. It can suggest functions on its own, and it has read and write access to the knowledge base.
CoenWarmer left a comment:
Great write-up. Two small typos.
Co-authored-by: Coen Warmer <coen.warmer@gmail.com>
@CoenWarmer thx, addressed (will dismiss your review and merge)
💚 Build Succeeded
Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com> Co-authored-by: Coen Warmer <coen.warmer@gmail.com>
Adds a README file for the Observability AI Assistant (or rather adds some actual content)