Features
- New in-buffer mode: `phantom`
- `stream` toggle for responses brought back
- Images handling UX improved
- Advertisement logic improved
Deprecated
- `append`, `replace`, `insert` prompt modes are deprecated and will be removed in the 5.0 release.
- The `mode: chat_completion` attribute of plugin commands (i.e. `"command": "openai", "args": { "mode": "chat_completion" }`) is deprecated, as it's actually the only mode for communicating with the LLM. The rest of the modes (e.g. `handle_image_input`, `reset_chat_history`, `refresh_output_panel`, `create_new_tab`) are preserved.
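For reference, a command call that still passes the deprecated `mode` attribute looks roughly like the key binding below. Only the `"command": "openai"` / `"args": { "mode": "chat_completion" }` pair comes from this release; the key chord itself is an illustrative assumption.

```json
// Illustrative Sublime Text key binding (hypothetical keys).
// After 5.0 the "mode": "chat_completion" argument can simply be
// dropped, since chat_completion becomes the only remaining mode.
{
    "keys": ["super+k", "super+o"],
    "command": "openai",
    "args": { "mode": "chat_completion" }
}
```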
Detailed description
Phantom mode
Phantom is an overlay UI placed inline in the editor view (see the picture below). It doesn't affect the content of the view.
- You can set `"prompt_mode": "phantom"` for an AI assistant in its settings.
- [optional] Select some text to pass as context to manipulate with.
- Hit `OpenAI: New Message` or `OpenAI: Chat Model Select` and ask whatever you'd like in the popup input pane.
- The phantom will appear below the cursor position, or at the beginning of the selection, while the LLM answer is streamed.
- You can apply actions to the LLM response; they're quite self-descriptive and follow the behavior of the deprecated in-buffer commands.
- You can hit `ctrl+c` to stop prompting, same as in `panel` mode.
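A minimal assistant entry enabling phantom mode might look like this. Only `"prompt_mode": "phantom"` comes from this release; the other fields (`name`, `chat_model`) are illustrative assumptions about the assistant settings shape.

```json
// Hypothetical assistant settings entry; field names other than
// prompt_mode are assumptions for illustration.
{
    "name": "Phantom assistant",
    "prompt_mode": "phantom",
    "chat_model": "gpt-4o-mini"
}
```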
Stream toggle
You can toggle the streaming behavior of a model's response with the `"stream": false` setting on a per-assistant basis. That's pretty much it; the default value is `true`.
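For example, a non-streaming assistant could be configured as below. Only the `"stream": false` key comes from this release; the surrounding fields are illustrative assumptions.

```json
// Hypothetical assistant entry disabling streaming; "stream"
// defaults to true when omitted.
{
    "name": "Non-streaming assistant",
    "prompt_mode": "panel",
    "stream": false
}
```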
Images handling UX improved
Image paths can now be fetched from the clipboard in addition to being extracted from the selection in a given view. The clipboard content can be either a single image path [and nothing more than that] or a list of such paths separated by newlines, e.g. `/Users/username/Documents/Project/image0.png\n/Users/username/Documents/Project/image1.png`.
Please note that the parser which tries to deduce whether your clipboard contains a [list of] image path[s] was made by AI and is quite fragile, so don't expect too much from it.
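As a rough illustration of the expected clipboard format, here is a hypothetical sketch of such a check. This is not the plugin's actual parser; the function name and accepted extensions are assumptions.

```python
import os

def looks_like_image_list(clipboard: str) -> bool:
    """Return True if the clipboard holds one absolute image path, or
    several separated by newlines, and nothing else (hypothetical sketch)."""
    lines = [line.strip() for line in clipboard.strip().splitlines()]
    if not lines:
        return False
    image_exts = (".png", ".jpg", ".jpeg", ".gif", ".webp")
    return all(
        line.lower().endswith(image_exts) and os.path.isabs(line)
        for line in lines
    )

# A newline-separated list of absolute image paths is accepted;
# arbitrary text is rejected.
paths = ("/Users/username/Documents/Project/image0.png\n"
         "/Users/username/Documents/Project/image1.png")
print(looks_like_image_list(paths))            # True
print(looks_like_image_list("not an image"))   # False
```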
Advertisement logic improvement
Advertisements now appear only for heavy users of the plugin, e.g. those processing a large number of tokens or sending/receiving many messages.
Full Changelog: 4.1.0...4.2.0