4.2.0 #61
yaroslavyaroslav announced in Announcements
Features
- `phantom` prompt mode
- `stream` toggle for responses brought back

Deprecated
- `append`, `replace`, `insert` prompt modes are deprecated and will be removed in the 5.0 release.
- The `mode: chat_completion` attribute of plugin commands (i.e. `"command": "openai", "args": { "mode": "chat_completion" }`) is deprecated, as it is actually the only mode for communicating with the LLM. The rest of the modes (e.g. `handle_image_input`, `reset_chat_history`, `refresh_output_panel`, `create_new_tab`) are preserved.

Detailed description
Phantom mode
Phantom is an overlay UI placed inline in the editor view (see the picture below). It doesn't affect the content of the view.
1. Set `"prompt_mode": "phantom"` for your AI assistant in its settings.
2. Run `OpenAI: New Message` or `OpenAI: Chat Model Select` and ask whatever you'd like in the popup input pane.
3. Press `ctrl+c` to stop prompting, the same as with `panel` mode.

Stream toggle
You can toggle the streaming behavior of a model response with the `"stream": false` setting on a per-assistant basis. That's pretty much it; the default value is `true`.

Images handling UX improved
Image paths can now be fetched from the clipboard in addition to being extracted from the selection in a given view. The clipboard content can be either a single image path [and nothing more than that] or a list of such paths separated by newlines, e.g. `/Users/username/Documents/Project/image0.png\n/Users/username/Documents/Project/image1.png`. Please note that the parser which tries to deduce whether your clipboard content is an image path [or a list of them] was made by AI and is quite fragile, so don't expect too much from it.
Advertisement logic improvement
Advertisements now appear only when users utilize the plugin heavily, such as by processing a large number of tokens or sending/receiving an excessive number of messages.
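Putting the new options together, an assistant entry in the plugin settings might look like the following sketch. Only `"prompt_mode"` and `"stream"` are confirmed by these notes; the surrounding structure and the `"name"` field are assumptions (Sublime Text settings files tolerate comments):

```jsonc
{
    "assistants": [
        {
            "name": "Example assistant",  // hypothetical field, for illustration
            "prompt_mode": "phantom",     // new inline overlay UI (default alternative: panel)
            "stream": false               // disable streaming; omitting it keeps the default true
        }
    ]
}
```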
Full Changelog: 4.1.0...4.2.0
This discussion was created from the release 4.2.0.