Releases: samestrin/llm-interface
v2.0.14
v2.0.11
Skipped v2.0.10
- New LLM Providers: Anyscale, Bigmodel, Corcel, Deepseek, Hyperbee AI, Lamini, Neets AI, Novita AI, NVIDIA, Shuttle AI, TheB.AI, and Together AI.
- Caching: Supports multiple caches: `simple-cache`, `flat-cache`, and `cache-manager`. `flat-cache` is now an optional package.
- Logging: Improved logging with the `loglevel` package.
- Improved Documentation: New examples, a glossary, and provider details. Updated API key details, model alias breakdown, and usage information.
- More Examples: LangChain.js RAG, Mixture-of-Agents (MoA), and more.
- Removed Dependency: `@anthropic-ai/sdk` is no longer required.
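Response caching trades a cache lookup for a network round trip to the provider. As an illustration of the idea only (a generic in-memory sketch, not the implementation of `simple-cache`, `flat-cache`, or `cache-manager`):

```javascript
// Minimal TTL cache for LLM responses, keyed by provider + prompt.
// Illustrative only; the library delegates to simple-cache, flat-cache,
// or cache-manager rather than anything like this class.
class ResponseCache {
  constructor(ttlMs = 60_000) {
    this.ttlMs = ttlMs;
    this.store = new Map();
  }

  key(provider, prompt) {
    return `${provider}:${prompt}`;
  }

  get(provider, prompt) {
    const k = this.key(provider, prompt);
    const entry = this.store.get(k);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      this.store.delete(k); // expired; fall through to a fresh request
      return undefined;
    }
    return entry.value;
  }

  set(provider, prompt, value) {
    this.store.set(this.key(provider, prompt), {
      value,
      expires: Date.now() + this.ttlMs,
    });
  }
}

const cache = new ResponseCache(1000);
cache.set('openai', 'Hello', 'Hi there!');
console.log(cache.get('openai', 'Hello')); // 'Hi there!'
```

The same lookup-before-request pattern applies regardless of which backing store is configured.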
v2.0.9
- New LLM Providers: Added support for AIML API (currently not respecting option values), DeepSeek, Forefront, Ollama, Replicate, and Writer.
- New LLMInterface Methods: `LLMInterface.setApiKey`, `LLMInterface.sendMessage`, and `LLMInterface.streamMessage`.
- Streaming: Streaming support available for: AI21 Studio, AIML API, DeepInfra, DeepSeek, Fireworks AI, FriendliAI, Groq, Hugging Face, LLaMa.CPP, Mistral AI, Monster API, NVIDIA, Octo AI, Ollama, OpenAI, Perplexity, Together AI, and Writer.
- New Interface Function: `LLMInterfaceStreamMessage`.
- Test Coverage: 100% test coverage for all interface classes.
- Examples: New usage examples.
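Streaming delivers the response incrementally instead of as a single blob. A minimal sketch of the consumption pattern, using a mock async generator in place of a real provider stream (this is not the library's actual interface):

```javascript
// Mock token stream standing in for a provider's streaming response.
async function* mockStream() {
  for (const chunk of ['Hello', ', ', 'world', '!']) {
    yield chunk; // a real stream yields chunks as the model produces them
  }
}

// Consume the stream chunk by chunk, accumulating the full reply.
async function collect(stream) {
  let text = '';
  for await (const chunk of stream) {
    text += chunk; // e.g. render each chunk to the UI as it arrives
  }
  return text;
}

collect(mockStream()).then((t) => console.log(t)); // "Hello, world!"
```

Whatever the provider, the consumer side reduces to a `for await...of` loop over chunks.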
v2.0.8
v2.0.7
- New LLM Providers: Added support for DeepInfra, FriendliAI, Monster API, Octo AI, Together AI, and NVIDIA.
- Improved Test Coverage: New DeepInfra, FriendliAI, Monster API, NVIDIA, Octo AI, Together AI, and watsonx.ai test cases.
- Refactor: Improved support for OpenAI compatible APIs using new BaseInterface class.
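Many providers expose OpenAI-compatible chat endpoints that differ mainly in base URL and API key, which is what makes a shared base class attractive. A hypothetical sketch of that pattern (class names, URL, and structure are assumptions for illustration, not the library's actual `BaseInterface`):

```javascript
// Hypothetical base class for OpenAI-compatible providers: the request
// shape is shared; subclasses supply only endpoint details.
class OpenAICompatibleInterface {
  constructor(baseURL, apiKey) {
    this.baseURL = baseURL;
    this.apiKey = apiKey;
  }

  // Build a chat-completions request in the common OpenAI wire format.
  buildRequest(model, messages) {
    return {
      url: `${this.baseURL}/chat/completions`,
      headers: {
        Authorization: `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ model, messages }),
    };
  }
}

// A provider subclass only pins down its base URL (URL assumed here).
class ExampleProviderInterface extends OpenAICompatibleInterface {
  constructor(apiKey) {
    super('https://api.example-provider.com/v1', apiKey);
  }
}
```

The payoff is that adding a compatible provider becomes a few lines of configuration rather than a full client implementation.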
v2.0.6
v2.0.3
v1.0.1
v0.0.11
v0.0.10
- Hugging Face: Added support for new LLM provider Hugging Face (over 150,000 publicly accessible machine learning models).
- Perplexity: Added support for new LLM provider Perplexity.
- AI21: Added support for new LLM provider AI21 Studio.
- JSON Output Improvements: The `json_object` mode for OpenAI and Gemini now guarantees the return of a valid JSON object or null.
- Graceful Retries: Failed LLM queries are automatically retried.
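The two behaviors above can be sketched together: retry an async query a few times with backoff, and parse the model's output as JSON, returning null rather than throwing on invalid output. A generic illustration, not the library's code:

```javascript
// Retry an async operation up to `attempts` times with linear backoff.
async function withRetries(fn, attempts = 3, delayMs = 250) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err; // remember the failure, wait, then retry
      await new Promise((r) => setTimeout(r, delayMs * (i + 1)));
    }
  }
  throw lastError; // all attempts exhausted
}

// Parse model output as JSON; return null instead of throwing, and
// reject non-object values (numbers, strings) the same way.
function parseJsonOrNull(text) {
  try {
    const value = JSON.parse(text);
    return typeof value === 'object' && value !== null ? value : null;
  } catch {
    return null;
  }
}
```

Usage would look like `parseJsonOrNull(await withRetries(() => query(prompt)))`, where `query` is a hypothetical stand-in for whatever sends the LLM request.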