From 4a24ad40506f53018fd0ab256b27276e1c3fdeb8 Mon Sep 17 00:00:00 2001
From: Yaroslav Yashin
Date: Sat, 24 Aug 2024 23:40:04 +0200
Subject: [PATCH] Release notes updated

Fixes for image handling.
---
 README.md         | 14 ++++++++
 messages.json     |  3 +-
 messages/4.1.0.md | 82 +++++++++++++++++++++++++++++++++++++++++++++++
 openai.py         |  2 +-
 4 files changed, 99 insertions(+), 2 deletions(-)
 create mode 100644 messages/4.1.0.md

diff --git a/README.md b/README.md
index 4430709..bbe9fba 100644
--- a/README.md
+++ b/README.md
@@ -9,6 +9,11 @@ It's not locked with just OpenAI anymore. [llama.cpp](https://github.com/ggergan
 
 ![](static/media/ai_chat_left.png)
 
+> [!NOTE]
+> I consider this plugin to be in its final state: I have no plans for further development. I still plan to fix bugs if any come up, but the many small enhancements that could smooth out minor issues and roughness here and there will likely never happen.
+
+> What I do plan is to implement an ST front end for the [plandex](https://github.com/plandex-ai/plandex) tool, based on parts of this plugin's codebase, to bring fancy and powerful agent-like capabilities to the ST ecosystem. So stay tuned.
+
 ## Features
 
 - Code manipulation (append, insert and edit) selected code with OpenAI models.
@@ -91,6 +96,15 @@ To send the whole file(s) in advance to request you should `super+button1` on th
 
 > [!NOTE]
 > It's also doesn't matter whether the file persists on a disc or it's just a virtual buffer with a text in it, if they're selected, their content will be send either way.
 
+### Image handling
+
+Image handling can be invoked with the `OpenAI: Handle Image` command.
+
+It expects an absolute path to an image to be selected in a buffer when the command is called (something like `/Users/username/Documents/Project/image.png`). In addition, a command can be passed via the input panel to process the image with special treatment. Only `png` and `jpg` images are supported.
+
+> [!WARNING]
+> The flow does not expect the image path to be passed through that input panel; it has to be selected in the buffer. I'm aware of the UX quality of this design decision, but I'm still too lazy to develop it into a better state.
+
 ### In buffer llm use case
 
 1. You can pick one of the following modes: `append`, `replace`, `insert`. They're quite self-descriptive. They should be set up in assistant settings to take effect.
diff --git a/messages.json b/messages.json
index bd3c668..a21f0ba 100644
--- a/messages.json
+++ b/messages.json
@@ -21,5 +21,6 @@
     "3.2.2": "messages/3.2.2.md",
     "3.2.3": "messages/3.2.3.md",
     "4.0.0": "messages/4.0.0.md",
-    "4.0.1": "messages/4.0.1.md"
+    "4.0.1": "messages/4.0.1.md",
+    "4.1.0": "messages/4.1.0.md"
 }
diff --git a/messages/4.1.0.md b/messages/4.1.0.md
new file mode 100644
index 0000000..2366100
--- /dev/null
+++ b/messages/4.1.0.md
@@ -0,0 +1,82 @@
+# 4.1.0
+
+## Features
+
+- Image handling support added (model support is required for this feature; so far I've tested it only with the gpt-4o[-mini] models). It can be invoked with the `OpenAI: Handle Image` command.
+
+It expects an absolute path to an image to be selected in a buffer when the command is called (something like `/Users/username/Documents/Project/image.png`). In addition, a command can be passed via the input panel to process the image with special treatment. Only `png` and `jpg` images are supported.
+
+> [!WARNING]
+> The flow does not expect the image path to be passed through that input panel; it has to be selected in the buffer. I'm aware of the UX quality of this design decision, but I'm still too lazy to develop it into a better state.
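+
+For those curious how the image actually reaches a vision-capable model: conceptually, the file gets embedded into the chat request as a base64 data URL. A minimal sketch of that idea (an illustration under assumptions, not this plugin's literal code; the helper name is made up):
+
+```python
+import base64
+
+def image_content_part(path: str) -> dict:
+    # Read the image from the absolute path selected in the buffer and wrap it
+    # in the content-part shape that vision-capable chat models (e.g. gpt-4o) accept.
+    with open(path, 'rb') as f:
+        encoded = base64.b64encode(f.read()).decode('ascii')
+    mime = 'image/png' if path.lower().endswith('.png') else 'image/jpeg'
+    return {'type': 'image_url', 'image_url': {'url': f'data:{mime};base64,{encoded}'}}
+```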
+
+> [!NOTE]
+> I consider this plugin to be in its final state: I have no plans for further development. I still plan to fix bugs if any come up, but the many small enhancements that could smooth out minor issues and roughness here and there will likely never happen.
+> What I do plan is to implement an ST front end for the [plandex](https://github.com/plandex-ai/plandex) tool, based on parts of this plugin's codebase, to bring fancy and powerful agent-like capabilities to the ST ecosystem. So stay tuned.
+
+## 4.0.0 Features
+
+Well, you already know all of them:
+
+1. Dedicated history and assistant settings instances for projects.
+2. Multiple file selection, to send files to the server as the context of a request.
+3. Tokens [approximate] calculation.
+4. Tab presentation of a chat and all the features that come along with it for free:
+    - text search,
+    - fast symbol navigation panel,
+    - `super+s` chat saving,
+    - view presentation setup options (see Default Settings for details).
+5. Ability to seamlessly mix different service providers within a single session, e.g. switching from `gpt-4-turbo` to a locally running `llama-3-8b` for one request and back for the next.
+
+## Claim
+
+This is a really huge update, so if you're happy with it, this is the place and time to [donate](https://github.com/sponsors/yaroslavyaroslav).
+
+## Breaking changes
+
+`output.OpenAI Chat` has been renamed to `output.AI Chat`; please update your output panel binding to handle this change.
+
+## Details
+
+### Dedicated history and settings
+
+That's it: you can set up a given project to be treated separately, giving it a dedicated chat history and settings/model. To make it even more exciting, you can use entirely different LLM providers for different projects.
+
+To set things up, please follow the manual in the [Readme](https://github.com/yaroslavyaroslav/OpenAI-sublime-text?tab=readme-ov-file#chat-history-management).
+
+### Multiple file selection
+
+Now you can send whole files to the server. The UX for this is not as straightforward as I wanted it to be, but this is what we're working with 💁🏻‍♂️.
+
+Basically, you have to make all the tabs you want to append to a request active and run the `[New Message|Chat Model] with Sheets` command. It looks quite weird with more than 4 tabs selected at once, but it's still convenient in its own way, since you're able to preview the content of each file to be sent beforehand.
+
+Pay attention: it has to be an actual Tab, and all of those tabs have to be placed within a single Group.
+
+For a detailed example, please follow the manual in the [Readme](https://github.com/yaroslavyaroslav/OpenAI-sublime-text?tab=readme-ov-file#additional-request-context-management).
+
+### Tokens [approximate] calculation
+
+Finally! The author has studied calculus and can now divide a number by 4. Such science!
+
+Jokes aside, this implementation is quite raw, but it still tells you how much you've already sent to the server during the session, and estimates how much you will send with the very next request (hint: it's the left value).
+
+This feature comes in really handy when you're working with expensive models like GPT-4. Believe me, I implemented it after spending $40 in a few days by blindly selecting a bunch of big files and sending them to the server.
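+
+For reference, the estimate really is that crude. A minimal sketch of the idea (an illustration only, not the plugin's literal implementation; the function name is made up):
+
+```python
+def approximate_tokens(text: str) -> int:
+    # Rough rule of thumb: about 4 characters per token for English-like text.
+    return len(text) // 4
+```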
+
+The token amounts show up automatically when the `AI Chat` tab is active and are hidden otherwise.
+
+### Tab representation of chat
+
+Output panel capabilities are limited in ST; in particular, it lacks navigation within its content. So now you can switch to a tab and get, for free, all the power an ordinary ST tab has: full text search, heading navigation, one-keystroke chat saving.
+
+To achieve that, type "Open in Tab" in the command palette and that's it. I strongly recommend using it in a separate view group (as on the main screenshot in the repo) to get the best of it, but you're not forced to.
+
+A bunch of new presentation settings has appeared because of this, to maximize the content space and cut down all the unnecessary stuff. In short, you can toggle off the line number display and the whole gutter itself. For more details, please look into the default settings documentation.
+
+### Hot model switching within a chat
+
+It's as simple as it is powerful. You can change models and even services on the fly: start your chat with the `gpt-4-turbo` model, then switch to your local llama3 instance, then to whatever third-party service, then back to `gpt-4-turbo`.
+
+And you can do that in a straightforward and convenient way: simply pick another predefined model from `"OpenAI: Chat Model Select"` over and over again.
+
+> [!IMPORTANT]
+> If you have enough time to read this far, you're certainly too rich and careless, and thus have to [donate](https://github.com/sponsors/yaroslavyaroslav).
diff --git a/openai.py b/openai.py
index 19c7e7f..55ed93d 100644
--- a/openai.py
+++ b/openai.py
@@ -86,7 +86,7 @@ def run(self, edit: Edit, **kwargs):
 
         elif mode == CommandMode.handle_image_input.value:
             _ = sublime.active_window().show_input_panel(
-                "Image URI: ",
+                "Command for Image: ",
                 "",
                 lambda user_input: self.on_input(
                     region=region if region else None,