Assist - OpenAI library port#25948

Merged
jakule merged 8 commits into master from jakule/ai-lib-port on May 11, 2023
Conversation

@jakule
Contributor

@jakule jakule commented May 9, 2023

OpenAI client implementation used by Teleport Assist. It will be used by our new assistant to generate Linux commands and execute them on Teleport nodes.

Collaborator

@zmb3 zmb3 left a comment


Are there tests for any of this? The Complete method in particular has a lot of branching and concurrency and it would be nice to cover it with some test cases.

Comment thread lib/ai/chat.go Outdated
Comment thread lib/ai/chat.go Outdated
Comment thread lib/ai/chat.go Outdated
Comment thread lib/ai/chat.go Outdated
Contributor

@camscale camscale left a comment


I find the Chat methods a little hard to review in depth because I don't know how they are called - there is nothing in this PR that calls them - the lack of context makes them harder to assess.

The use of switch for error handling seems non-idiomatic. I'm not sure why you are not just using if err != nil, etc. in places. It made the code harder for me to read: it was such an unfamiliar way of writing what is normally very common error handling that I figured something special must be going on.

But it otherwise seems correct, so LGTM

Comment thread lib/ai/chat.go Outdated
Comment thread lib/ai/chat.go Outdated
Comment on lines 193 to 208
Contributor


If you use just if/else if here instead of a switch, you don't need the label, and it's shorter:

if errors.Is(err, io.EOF) {
    break
} else if err != nil {
    return nil, numTokens, trace.Wrap(err)
}

Contributor Author


Changed

Comment thread lib/ai/chat.go Outdated
Contributor


This is going to block until the caller receives from the error channel, and it won't get cancelled if the context is cancelled.

Contributor Author


@xacrimon Is this by design?

Contributor


I noted this during dev, thinking it was fine since the worker goroutine will cancel and return an error on cancellation itself. Now, in hindsight this isn't going to catch some oddball case of that goroutine panicking and not exiting via an error path so we should fix it by selecting on the cancellator here too.

Comment thread lib/ai/chat.go Outdated
Contributor


Returning an any without any documentation of what it can contain makes this a hard API to use. How is the caller meant to use that any response? I think, at the very least, all possible types returned via this parameter should be documented.

Contributor Author


Added.

Comment thread lib/ai/chat.go Outdated
Comment thread lib/ai/chat.go Outdated
Comment on lines 193 to 208
Contributor Author


Changed

Comment thread lib/ai/chat.go Outdated
Comment thread lib/ai/chat.go Outdated
Contributor Author


Added.

Comment thread lib/ai/chat.go Outdated
Contributor Author


@xacrimon Is this by design?

Comment thread lib/ai/chat_test.go Outdated
Contributor Author


@justinas Is this a bug, or am I using the API the wrong way?

Contributor


So with StreamingMessage, the tokens flow asynchronously (via a channel) to the caller of Complete, so numTokens is kind of useless.

processComplete() in the dev-ai branch already handles this correctly, but I'll modify the signature to make it less confusing.

Contributor


See #26058

Contributor Author


@justinas, I partially ported your PR for now.

@jakule
Contributor Author

jakule commented May 11, 2023

Are there tests for any of this? The Complete method in particular has a lot of branching and concurrency and it would be nice to cover it with some test cases.

@zmb3 Test coverage added and all comments have been addressed.

@jakule jakule requested a review from zmb3 May 11, 2023 01:35
@jakule jakule force-pushed the jakule/ai-lib-port branch from f536b67 to e5529c9 Compare May 11, 2023 15:27
@jakule jakule requested review from justinas and xacrimon May 11, 2023 15:27
Comment thread lib/ai/chat.go
// Insert inserts a message into the conversation. This is commonly in the
// form of a user's input but may also take the form of a system message used for instructions.
func (chat *Chat) Insert(role string, content string) Message {
chat.messages = append(chat.messages, openai.ChatCompletionMessage{
Collaborator


This is not supposed to be accessed concurrently, is it?

Contributor Author


Nope. Single thread/goroutine.

Comment thread lib/ai/client.go
"github.com/tiktoken-go/tokenizer/codec"
)

type Client struct {
Collaborator


Can you add godocs on everything?

Comment thread lib/ai/prompt.go

const initialAIResponse = `Hey, I'm Teleport - a powerful tool that can assist you in managing your Teleport cluster via ChatGPT.`

const promptExtractInstruction = `If the input is a request to complete a task on a server, try to extract the following information:
Collaborator


I don't know anything about prompt engineering, but do we need to say "try to extract" instead of just "extract"?

Contributor Author


I also vote for "extract", but @xacrimon is the right person to ask.

Contributor


I recall changing it to "try" instead of a plain "extract" because sometimes it would complain audibly that it couldn't do the thing, and it totally broke the immersion.

Comment thread lib/ai/prompt.go Outdated
]
}

If the user is not asking to complete a task on a server directly but is asking a question related to Teleport or Linux - disgard this entire message and help them with their Teleport or Linux related request.`
Collaborator


Suggested change
If the user is not asking to complete a task on a server directly but is asking a question related to Teleport or Linux - disgard this entire message and help them with their Teleport or Linux related request.`
If the user is not asking to complete a task on a server directly but is asking a question related to Teleport or Linux - disregard this entire message and help them with their Teleport or Linux related request.`

Contributor Author


I fixed that back in dev-ai, then it was reverted. Let me change it again and see what happens 😅

@public-teleport-github-review-bot public-teleport-github-review-bot Bot removed the request for review from probakowski May 11, 2023 16:07
Fix typo
@jakule jakule enabled auto-merge May 11, 2023 17:12
@jakule jakule added this pull request to the merge queue May 11, 2023
Merged via the queue into master with commit 60c01fa May 11, 2023
@jakule jakule deleted the jakule/ai-lib-port branch May 11, 2023 17:56
jakule added a commit that referenced this pull request May 11, 2023
* Assist - OpenAI library port

* Add tests

* Address code review comments

* Added comment

* Partial backport of #26058

* Move AI messages to a new file

* Prevent blocking on error

* Add comments.
Fix typo
jakule added a commit that referenced this pull request May 12, 2023
* Assist - OpenAI library port

* Add tests

* Address code review comments

* Added comment

* Partial backport of #26058

* Move AI messages to a new file

* Prevent blocking on error

* Add comments.
Fix typo
jakule added a commit that referenced this pull request Jun 2, 2023
* Assist - OpenAI library port

* Add tests

* Address code review comments

* Added comment

* Partial backport of #26058

* Move AI messages to a new file

* Prevent blocking on error

* Add comments.
Fix typo
jakule added a commit that referenced this pull request Jun 3, 2023
* Assist - API implementation (#25810)

* Assist - API implementation

* Add Assistant CRUD test

* Move assistant API to a separate file.

* Rename GRPC methods
Fix comments

* Address PR comments

* Update GRPC API - addressing code review comments

* Move assist API to a new GRPC service

* Add Assistant RBAC rules to this backport.

* Add missing license

* Update comment.

* Move username and ID out of AssistantMessage (#25964)

* Move username and ID out of AssistantMessage

#25810 added username and conversation ID to AssistantMessage. They aren't needed inside the message and can be moved out of it.
This PR also fixes `grpc: error while marshaling: proto: Marshal called with nil` in `CreateAssistantMessage`.

Note: It's safe to change protobuf numbers as this code hasn't been released or backported yet.

* Fix test

* make grpc

* Assist - Configuration and usage (#25953)

* Assist - Configuration and usage

* Add network config test

* Add config test

* Run GCI

* Address review comments

* Assist - OpenAI library port (#25948)

* Assist - OpenAI library port

* Add tests

* Address code review comments

* Added comment

* Partial backport of #26058

* Move AI messages to a new file

* Prevent blocking on error

* Add comments.
Fix typo

* Assist - Execution web endpoint (#25955)

* Assist - Execution web endpoint

* Add test
Clean up code a bit

* Add missing username

* Address review comments

* Make more implementation shared between Terminal and Command Web Handlers

* Address review comments

* Address review comments

* Fixes after rebase
Add comments

* Add comments
Fix linter

* Add TELEPORT to Teleport related environment variable.

* Assist - Web endpoints (#26046)

* Assist - Web endpoints

* GCI

* proto rpc

* enforce endpoint checks

* disable in ui if disallowed by auth

* add comment

* unwrap stack

* Refactor code

---------

Co-authored-by: Joel Wejdenstål <jwejdenstal@icloud.com>

* [Assist] Fix random user selection (#26183)

Currently, after selecting a user for command execution in Teleport Assist the user can randomly change. This PR fixes this behavior.

* Add rate limiting to Assist (#26011)

* Add rate limiting to Assist

* Only rate limit Assist in Cloud

* Add a comment to assistantLimiter

* Fixes after rebase

* Add 'rate-limited' test case to assistant_test

* Handle CHAT_MESSAGE_UI in Assist web UI

* Add godoc

* CHAT_MESSAGE_UI -> CHAT_MESSAGE_ERROR

* Run assistant test cases in parallel

* Rename `ChatGPT` to `GPT-4` (#26272)

* Rename `ChatGPT` to `GPT4`

ChatGPT is a user friendly name, but is technically inaccurate.

* Apply suggestions from code review

Co-authored-by: Reed Loden <reed@goteleport.com>

* Lint fix

* Additional ChatGPT --> OpenAI GPT-4 fixes

---------

Co-authored-by: Reed Loden <reed@goteleport.com>

* Fix Assist rate-limiting in Cloud (#26342)

When Proxy is separate from Auth,
Proxy 'modules' will not contain meaningful data.
Instead, one must use ClusterFeatures fetched from the Auth server

* Assist UI improvements (#26365)

* Update assist warning wording, add link to ToS (#26396)

* Always render the portal for the assist title to go into (#26733)

* Stop using TokenPath for API key in Assist (#26671)

* [Assist] MFA support (#26719)

* Initial support for MFA in Assist

* UI webauth handler

* WebUI - WIP

* Run prettier

* Perform MFA ceremony only once.

* Cleanup JS

* Remove hacky WS logic

* Add cancel MFA logic

* [Assist] Prevent creating messages without conversation (#26797)

* Add parallel execution to assist. (#26563)

* Add parallel execution to assist.

* Extract execution logic to a new function.

* Add test

* Switch uber to std

* Address code review comments

* Fix display of Assist command executions with empty output (#27010)

* Fix display of Assist executions with empty output

* Lint

* Nit

* Improve display of commands that failed with code

* Address lint

* [Assist] Allow removing assist conversations (#26788)

* [Assist] Allow removing assist conversations

* Display landing page after the conversation is removed

* Improve styling and add a confirmation dialog

* Change the icon opacity to copy the main navigation

* Remove unused minus icon

* Add missing trace.wrap

---------

Co-authored-by: Ryan Clark <ryan.clark@goteleport.com>

* Backport fixes

* Regenerate go.sum to make the linter happy.

* Sync files after rebase

* After rebase fixes

---------

Co-authored-by: Joel Wejdenstål <jwejdenstal@icloud.com>
Co-authored-by: Justinas Stankevičius <justinas@users.noreply.github.com>
Co-authored-by: Mike Jensen <jentfoo@users.noreply.github.com>
Co-authored-by: Reed Loden <reed@goteleport.com>
Co-authored-by: Ryan Clark <ryan.clark@goteleport.com>

6 participants