CharlieFRuan merged commit 5e3c1ed into mlc-ai:main on Apr 10, 2024.

atebites-hub pushed a commit to atebites-hub/web-llm that referenced this pull request on Oct 4, 2025:
### Changes
- Enable `webllm.AppConfig.useIndexedDBCache` so that users can choose
between the Cache API and the IndexedDB Cache
  - mlc-ai#352
- Refactor `ChatModule` into `Engine`, and finalize the OpenAI API as
`engine.chat.completions.create()` to match OpenAI's
  - mlc-ai#361
- Add a deprecation warning to `engine.generate()`, suggesting users use
`engine.chat.completions.create()` instead
- Subsequently, support implicit KVCache reuse for multi-round chat usage
  - Though `chat.completions.create()` is functional (i.e. stateless),
requiring users to maintain the full chat history themselves, we reuse
KVs when we detect a multi-round chat, so no performance is sacrificed
  - See mlc-ai#359 and `examples/multi-round-chat`
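
The implicit KV reuse described above hinges on recognizing that a new stateless request simply extends the previous conversation rather than starting a new one. Below is a minimal TypeScript sketch of such a prefix check; the helper name, message shape, and detection logic are illustrative assumptions for this note, not web-llm's actual internals (see mlc-ai#359 for the real implementation):

```typescript
// Illustrative sketch: decide whether a new OpenAI-style `messages` array
// is a continuation of the previous round, in which case the engine could
// keep its KV cache and prefill only the new turns.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Returns true when `next` extends `prev` without rewriting history,
// i.e. `prev` is a strict prefix of `next`. (Hypothetical helper.)
function isMultiRoundContinuation(
  prev: ChatMessage[],
  next: ChatMessage[],
): boolean {
  if (next.length <= prev.length) return false;
  return prev.every(
    (m, i) => m.role === next[i].role && m.content === next[i].content,
  );
}

// Round 1: user asks a question; engine replies.
const history: ChatMessage[] = [
  { role: "user", content: "Hi!" },
  { role: "assistant", content: "Hello! How can I help?" },
];

// Round 2: the caller resends the full history plus one new user message,
// as a stateless OpenAI-style API requires.
const round2: ChatMessage[] = [
  ...history,
  { role: "user", content: "Tell me a joke." },
];

console.log(isMultiRoundContinuation(history, round2)); // true: reuse KVs
console.log(isMultiRoundContinuation(round2, history)); // false: fresh prefill
```

From the caller's point of view nothing changes: the API stays stateless and the full history is always passed in; the prefix check only lets the engine skip redundant prefill work.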

### WASM Version

WASMs are still `v0_2_30` as no compile-time changes are required.

### tvmjs
Updated tvmjs to:
- mlc-ai/relax@c7bdcab
  - i.e. apache/tvm@d1e24ca