Conversation
zmb3
left a comment
Are there tests for any of this? The Complete method in particular has a lot of branching and concurrency and it would be nice to cover it with some test cases.
camscale
left a comment
I find the Chat methods a little hard to review in depth because I don't know how they are called - there is nothing in this PR that calls them - the lack of context makes them harder to assess.
The use of switch for error handling seems non-idiomatic. I'm not sure why you are not just using if err != nil, etc. in places. It made the code harder for me to read: because it was an unfamiliar take on what is normally very common error handling, I figured something special must be going on.
But it otherwise seems correct, so LGTM
If you use just if/else if here instead of a switch, you don't need the label, and it's shorter:

```go
if errors.Is(err, io.EOF) {
	break
} else if err != nil {
	return nil, numTokens, trace.Wrap(err)
}
```
This is going to block until the caller receives from the error channel, and it will not get cancelled if the context is cancelled.
I noted this during dev and thought it was fine, since the worker goroutine cancels and returns an error on cancellation itself. In hindsight, this won't catch the oddball case of that goroutine panicking and not exiting via an error path, so we should fix it by selecting on the cancellation channel here too.
Returning an `any` without any documentation of what it can contain makes this a hard API to use. How is the caller meant to use that `any` response? I think at the very least all possible types returned via this parameter should be documented.
@justinas Is this a bug, or am I using the API in the wrong way?
So with StreamingMessage, the tokens flow asynchronously (via a channel) to the caller of Complete, so numTokens is kind of useless.
processComplete() in the dev-ai branch already handles this correctly, but I'll modify the signature to make it less confusing.
@justinas, I partially ported your PR for now.
@zmb3 Test coverage added and all comments have been addressed.
Force-pushed e8e737d to c972ea1
Force-pushed f536b67 to e5529c9
```go
// Insert inserts a message into the conversation. This is commonly in the
// form of a user's input but may also take the form of a system message used for instructions.
func (chat *Chat) Insert(role string, content string) Message {
	chat.messages = append(chat.messages, openai.ChatCompletionMessage{
```
This is not supposed to be accessed concurrently, is it?
Nope. Single thread/goroutine.
```go
	"github.com/tiktoken-go/tokenizer/codec"
)

type Client struct {
```
Can you add godocs on everything?
```go
const initialAIResponse = `Hey, I'm Teleport - a powerful tool that can assist you in managing your Teleport cluster via ChatGPT.`
```
```go
const promptExtractInstruction = `If the input is a request to complete a task on a server, try to extract the following information:
```
I don't know anything about prompt engineering, but do we need to say "try to extract" instead of just "extract"?
I also vote for "extract", but @xacrimon is the right person to ask.
I recall changing it to "try" instead of a plain "extract" because sometimes it would complain audibly that it couldn't do the thing, and it totally broke the immersion.
```go
	]
}

If the user is not asking to complete a task on a server directly but is asking a question related to Teleport or Linux - disgard this entire message and help them with their Teleport or Linux related request.`
```
```diff
- If the user is not asking to complete a task on a server directly but is asking a question related to Teleport or Linux - disgard this entire message and help them with their Teleport or Linux related request.`
+ If the user is not asking to complete a task on a server directly but is asking a question related to Teleport or Linux - disregard this entire message and help them with their Teleport or Linux related request.`
```
I fixed that back in dev-ai, then it was reverted. Let me change it again and see what happens 😅
Fix typo
* Assist - OpenAI library port
* Add tests
* Address code review comments
* Added comment
* Partial backport of #26058
* Move AI messages to a new file
* Prevent blocking on error
* Add comments. Fix typo
* Assist - API implementation (#25810)
  * Assist - API implementation
  * Add Assistant CRUD test
  * Move assistant API to a separate file.
  * Rename GRPC methods Fix comments
  * Address PR comments
  * Update GRPC API - addressing code review comments
  * Move assist API to a new GRPC service
  * Add Assistant RBAC rules to this backport.
  * Add missing license
  * Update comment.
* Move username and ID out of AssistantMessage (#25964)
  * Move username and ID out of AssistantMessage. #25810 added username and conversation ID to AssistantMessage. They aren't needed inside the message and can be moved out of it. This PR also fixes ` grpc: error while marshaling: proto: Marshal called with nil` in `CreateAssistantMessage`. Note: It's safe to change protobuf numbers as this code hasn't been released or backported yet.
  * Fix test
  * make grpc
* Assist - Configuration and usage (#25953)
  * Assist - Configuration and usage
  * Add network config test
  * Add config test
  * Run GCI
  * Address review comments
* Assist - OpenAI library port (#25948)
  * Assist - OpenAI library port
  * Add tests
  * Address code review comments
  * Added comment
  * Partial backport of #26058
  * Move AI messages to a new file
  * Prevent blocking on error
  * Add comments. Fix typo
* Assist - Execution web endpoint (#25955)
  * Assist - Execution web endpoint
  * Add test Clean up code a bit
  * Add missing username
  * Address review comments
  * Make more implementation shared between Terminal and Command Web Handlers
  * Address review comments
  * Address review comments
  * Fixes after rebase Add comments
  * Add comments Fix linter
  * Add TELEPORT to Teleport related environment variable.
* Assist - Web endpoints (#26046)
  * Assist - Web endpoints
  * GCI
  * proto rpc
  * enforce endpoint checks
  * disable in ui if disallowed by auth
  * add comment
  * unwrap stack
  * Refactor code
  Co-authored-by: Joel Wejdenstål <jwejdenstal@icloud.com>
* [Assist] Fix random user selection (#26183) Currently, after selecting a user for command execution in Teleport Assist the user can randomly change. This PR fixes this behavior.
* Add rate limiting to Assist (#26011)
  * Add rate limiting to Assist
  * Only rate limit Assist in Cloud
  * Add a comment to assistantLimiter
  * Fixes after rebase
  * Add 'rate-limited' test case to assistant_test
  * Handle CHAT_MESSAGE_UI in Assist web UI
  * Add godoc
  * CHAT_MESSAGE_UI -> CHAT_MESSAGE_ERROR
  * Run assistant test cases in parallel
* Rename `ChatGPT` to `GPT-4` (#26272)
  * Rename `ChatGPT` to `GPT4`. ChatGPT is a user friendly name, but is technically inaccurate.
  * Apply suggestions from code review
  * Lint fix
  * Additional ChatGPT --> OpenAI GPT-4 fixes
  Co-authored-by: Reed Loden <reed@goteleport.com>
* Fix Assist rate-limiting in Cloud (#26342) When Proxy is separate from Auth, Proxy 'modules' will not contain meaningful data. Instead, one must use ClusterFeatures fetched from the Auth server
* Assist UI improvements (#26365)
* Update assist warning wording, add link to ToS (#26396)
* Always render the portal for the assist title to go into (#26733)
* Stop using TokenPath for API key in Assist (#26671)
* [Assist] MFA support (#26719)
  * Initial support for MFA in Assist
  * UI webauth handler
  * WebUI - WIP
  * Run prettier
  * Perform MFA ceremony only once.
  * Cleanup JS
  * Remove hacky WS logic
  * Add cancel MFA logic
* [Assist] Prevent creating messages without conversation (#26797)
* Add parallel execution to assist. (#26563)
  * Add parallel execution to assist.
  * Extract execution logic to a new function.
  * Add test
  * Switch uber to std
  * Address code review comments
* Fix display of Assist command executions with empty output (#27010)
  * Fix display of Assist executions with empty output
  * Lint
  * Nit
  * Improve display of commands that failed with code
  * Address lint
* [Assist] Allow removing assist conversations (#26788)
  * [Assist] Allow removing assist conversations
  * Display landing page after the conversion is removed
  * Improve styling and add a confirmation dialog
  * Change the icon opacity to copy the main navigation
  * Remove unused minus icon
  * Add missing trace.wrap
  Co-authored-by: Ryan Clark <ryan.clark@goteleport.com>
* Backport fixes
* Regenerate go.sum to make the linter happy.
* Sync files after rebase
* After rebase fixes

Co-authored-by: Joel Wejdenstål <jwejdenstal@icloud.com>
Co-authored-by: Justinas Stankevičius <justinas@users.noreply.github.com>
Co-authored-by: Mike Jensen <jentfoo@users.noreply.github.com>
Co-authored-by: Reed Loden <reed@goteleport.com>
Co-authored-by: Ryan Clark <ryan.clark@goteleport.com>
OpenAI client implementation used by Teleport Assist. It will be used by our new assistant to generate Linux commands and execute them on Teleport nodes.