Releases: rahuldshetty/llm.js
2.0.1
What's Changed
- Merge 2.0.1 release changes by @rahuldshetty in #13
- Structured LLM output with JSON Schema and GBNF grammar support for model type 'GGUF_CPU'
Full Changelog: 2.0.0...2.0.1
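For context on the structured-output feature: grammar-constrained decoding restricts the model so it can only emit strings matching a schema or grammar. These notes don't document the exact LLM.js option names, so the shapes below are illustrative only; the JSON Schema and the llama.cpp-style GBNF grammar both describe the same `{"name", "age"}` object.

```javascript
// Illustrative only: the exact LLM.js options for passing a schema or
// grammar are not shown in the release notes, so treat these shapes as
// assumptions, not the library's real API.

// A JSON Schema constraining the model to emit {"name": ..., "age": ...}.
const schema = {
  type: "object",
  properties: {
    name: { type: "string" },
    age: { type: "integer" },
  },
  required: ["name", "age"],
};

// The same constraint expressed as a llama.cpp-style GBNF grammar.
const grammar = `
root   ::= "{" ws "\\"name\\"" ws ":" ws string "," ws "\\"age\\"" ws ":" ws number ws "}"
string ::= "\\"" [a-zA-Z ]* "\\""
number ::= [0-9]+
ws     ::= [ \\t\\n]*
`;

// A tiny check that a sample constrained output carries the schema's
// required keys (a stand-in for what grammar-constrained decoding
// guarantees at generation time).
function satisfiesRequired(text, schema) {
  const obj = JSON.parse(text);
  return schema.required.every((key) => key in obj);
}
```

With grammar-constrained decoding the parse-and-check step is unnecessary by construction; the helper above only illustrates the invariant the grammar enforces.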
2.0.0
What's Changed
- Bump @babel/traverse from 7.22.8 to 7.23.4 by @dependabot in #2
- Enable Model Caching by @rahuldshetty in #4
- Use the Browser's Cache Storage if it's available, to avoid re-downloading models by @felladrin in #3
- Initial Release of 2.0.0 version by @rahuldshetty in #12
New Contributors
- @dependabot made their first contribution in #2
- @felladrin made their first contribution in #3
Full Changelog: 1.0.2...2.0.0
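The caching change contributed by @felladrin relies on the browser's Cache Storage API (`caches.open`, `cache.match`, `cache.put`) so a model file is downloaded only once. A minimal sketch of that pattern, with the cache and fetch function injected so it also runs outside a browser; the function name and signature are assumptions, not LLM.js's actual internals:

```javascript
// Sketch of download-once model caching via an injectable cache backend.
// In a browser you would pass `await caches.open("llmjs-models")` as
// `cache`; the names here are illustrative, not LLM.js's real API.
async function loadModelBytes(url, fetchFn, cache) {
  if (cache) {
    const hit = await cache.match(url); // Response if previously stored
    if (hit) return new Uint8Array(await hit.arrayBuffer());
  }
  const res = await fetchFn(url);
  if (cache) await cache.put(url, res.clone()); // keep a copy for next time
  return new Uint8Array(await res.arrayBuffer());
}
```

Note the `res.clone()`: a `Response` body can be read only once, so the copy goes into the cache while the original is returned to the caller.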
1.0.2
Added
- Support for the latest GGUF format via the updated llama2-cpp module.
- Added a context_size parameter for llama models.
Changed
- Rebranded project to "LLM.JS".
- Removed STACK_SIZE flag from build scripts.
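The context_size parameter caps how many tokens the model attends over (prompt plus generated output), so oversized prompts must be truncated by the caller. A hedged sketch of that budgeting; the config keys other than context_size are assumptions for illustration:

```javascript
// Hypothetical config shape: "context_size" matches the parameter named
// in the 1.0.2 notes, the other keys are illustrative assumptions.
const modelConfig = {
  model_url: "https://example.com/model.gguf", // placeholder URL
  context_size: 2048, // max tokens in the window (prompt + output)
};

// Keep the newest prompt tokens when they exceed the context window,
// reserving room for the tokens the model will generate.
function fitToContext(promptTokens, contextSize, maxNewTokens) {
  const budget = contextSize - maxNewTokens;
  return promptTokens.length <= budget
    ? promptTokens
    : promptTokens.slice(promptTokens.length - budget);
}
```

Dropping the oldest tokens is one common policy; summarizing or chunking the prompt are alternatives when early context must survive.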
1.0.1
1.0.0
Added
- ggml.js package
- Added model support for Dolly v2, GPT-2, GPT-J, GPT-NeoX, MPT, Replit, StarCoder
- docsify documentation