
0.20

@simonw released this 23 Jan 04:47
  • New model, o1. This model does not yet support streaming. #676
  • o1-preview and o1-mini models now support streaming.
  • New models, gpt-4o-audio-preview and gpt-4o-mini-audio-preview. #677
  • llm prompt -x/--extract option, which returns just the content of the first fenced code block in the response. Try llm prompt -x 'Python function to reverse a string'. #681
    • Creating a template using llm ... --save x now supports the -x/--extract option, which is saved to the template. YAML templates can set this option using extract: true.
    • New llm logs -x/--extract option extracts the first fenced code block from matching logged responses.
  • New llm models -q 'search' option, which returns models that case-insensitively match the search query. #700
  • Installation documentation now also includes uv. Thanks, Ariel Marcus. #690 and #702
  • llm models command now shows the current default model at the bottom of the listing. Thanks, Amjith Ramanujam. #688
  • Plugin directory now includes llm-venice, llm-bedrock, llm-deepseek and llm-cmd-comp.
  • Fixed bug where some dependency version combinations could cause a Client.__init__() got an unexpected keyword argument 'proxies' error. #709
  • OpenAI embedding models are now available using their full names: text-embedding-ada-002, text-embedding-3-small and text-embedding-3-large. The previous names are still supported as aliases. Thanks, web-sst. #654
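Since the release notes mention that YAML templates can persist the extract option with extract: true, a template saved via --save might look roughly like this (the prompt key and exact file layout are assumptions about the template format, not taken from these notes):

```yaml
# Hypothetical template file saved by: llm 'Python function to reverse a string' --save reverse -x
prompt: 'Python function to reverse a string'
extract: true
```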
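The new -x/--extract behavior described above pulls out just the first fenced code block from a response. As a rough illustration of what that means in practice, here is a minimal sketch in Python; the actual implementation in llm may handle more edge cases (language tags, indentation, unterminated fences), and the function name here is made up for this example:

```python
import re


def extract_first_fenced_block(text: str):
    """Return the contents of the first ```-fenced code block, or None.

    A simplified sketch of the -x/--extract behavior, not llm's actual code.
    """
    # Match an opening fence (with optional language tag), capture everything
    # up to the next closing fence, non-greedily, across newlines.
    match = re.search(r"```[^\n]*\n(.*?)\n```", text, re.DOTALL)
    return match.group(1) if match else None


response = (
    "Here you go:\n"
    "```python\n"
    "def reverse(s):\n"
    "    return s[::-1]\n"
    "```\n"
    "Hope that helps!"
)
print(extract_first_fenced_block(response))
```

Running this prints only the code between the fences, dropping the surrounding commentary, which is the point of piping `llm prompt -x 'Python function to reverse a string'` straight into a file or another tool.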