v0.2.0b1 #546
Will this in any way make it easier to target custom OpenAI v1 API endpoints, such as the ones presented by ollama and LM Studio? If you haven't seen LM Studio (the Windows approach): drop the installer into a Windows Sandbox (an optional Windows 10/11 feature), install, run, point it at a model, try the chat tab on the left, then try the server tab on the left and paste the example curl command line. ollama is basically Docker's OS-image concept applied to local LLMs, treating the models themselves as images.

I'm personally more interested in trying GCP's Vertex AI models, especially Bison. Test chats with it via the GCP console produce some really exciting results for coding tasks; it blows Bard out of the water in comparison. It does give some wrong initial answers if you challenge it with predictable edge cases (domain-specific niches with unanswered questions on Stack Overflow or Quora), but it then makes course corrections I've never seen from Bard or GPT-4: (paraphrasing) "Square the fox, except fox isn't a number, so let's assume I meant the irrational number i". Feels like they perhaps trained it on transcripts of programming talks.
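To make the ask concrete, here's roughly what targeting one of these local endpoints looks like with the stock openai Python client. This is just a sketch: the base URLs, placeholder API key, and model name are assumptions about a typical local setup, not anything documented by this release.

```python
# Sketch: pointing the openai v1.x Python client at a local OpenAI-compatible server.
# The base URL, API key, and model name below are assumed defaults, not part of v0.2.0b1.
from openai import OpenAI

client = OpenAI(
    # LM Studio's local server typically listens on http://localhost:1234/v1;
    # ollama's OpenAI-compatible endpoint is typically http://localhost:11434/v1.
    base_url="http://localhost:1234/v1",
    api_key="not-needed-locally",  # local servers generally ignore the key, but the client requires one
)

response = client.chat.completions.create(
    model="local-model",  # hypothetical name; use whatever model the server has loaded
    messages=[{"role": "user", "content": "Say hello from a local endpoint."}],
)
print(response.choices[0].message.content)
```

The question is whether v0.2.0 offers a first-class way to plug an endpoint like this in, rather than overriding the base URL by hand.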
This is a beta release of v0.2.0.
Highlights
Thanks to all the reviewers of the v0.2 migration, and to @aayushchhabra1999, @bonadio, @marcgreen, and the other contributors!
What's Changed
New Contributors
Full Changelog: v0.1.14...v0.2.0b1
This discussion was created from the release v0.2.0b1.