
Conversation

@DiegoCao DiegoCao commented Apr 1, 2024

Add `AppConfig.useIndexedDBCache` to optionally use `IndexedDBCache` rather than the default Cache API.
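For reference, a minimal sketch of opting in (the engine factory name and model id follow the examples of this release and should be treated as assumptions if your version differs):

```typescript
import * as webllm from "@mlc-ai/web-llm";

// Select IndexedDB instead of the default Cache API for model artifacts.
const appConfig: webllm.AppConfig = {
  model_list: webllm.prebuiltAppConfig.model_list,
  useIndexedDBCache: true,
};

// Model id is illustrative; pick any entry from appConfig.model_list.
const engine = await webllm.CreateEngine("Llama-2-7b-chat-hf-q4f32_1", {
  appConfig,
});
```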

Also add `examples/cache-usage` to demonstrate the usage of the two caches and cache utilities such as deleting a model from cache.
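A hedged sketch of the utilities the example exercises (function names follow `examples/cache-usage`; treat the exact signatures as assumptions):

```typescript
import * as webllm from "@mlc-ai/web-llm";

const appConfig: webllm.AppConfig = {
  model_list: webllm.prebuiltAppConfig.model_list,
  useIndexedDBCache: true, // utils operate on whichever cache this selects
};
const modelId = "Llama-2-7b-chat-hf-q4f32_1"; // illustrative

// Check whether the model is already in the selected cache.
if (await webllm.hasModelInCache(modelId, appConfig)) {
  // Remove everything cached for the model: weights, wasm, chat config.
  await webllm.deleteModelAllInfoInCache(modelId, appConfig);
}
```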

@DiegoCao DiegoCao mentioned this pull request Apr 1, 2024
@CharlieFRuan

Thank you! I will review soon today/tomorrow.

@CharlieFRuan CharlieFRuan changed the title "Support IndexDB on webllm" → "[Cache] Support IndexedDB, add useIndexedDBCache in AppConfig" Apr 6, 2024
@CharlieFRuan

Depends on apache/tvm#16733 being merged.

@CharlieFRuan CharlieFRuan merged commit 16d5e00 into mlc-ai:main Apr 10, 2024
CharlieFRuan added a commit that referenced this pull request Apr 10, 2024
### Changes
- Enable `webllm.AppConfig.useIndexedDBCache` so that users can choose
between the Cache API and the IndexedDB cache
  - #352
- Refactor `ChatModule` into `Engine`, finalizing the OpenAI API as
`engine.chat.completions.create()` to match OpenAI's
  - #361
- Add a deprecation warning to `engine.generate()`, suggesting users use
`engine.chat.completions.create()`
- As a follow-up, support implicit KVCache reuse for multi-round chat
  - Though `chat.completions.create()` is functional (i.e. stateless), requiring
users to resend the full chat history on each call, we reuse the KVs when we
detect multi-round chat, so no performance is sacrificed
  - See #359, `examples/multi-round-chat`, and the sketch after this list
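For illustration, a minimal multi-round sketch, assuming the `CreateEngine` factory and exported OpenAI-style types of this release: the second `create()` call resends the full history, and the engine reuses its KVCache for the shared prefix.

```typescript
import * as webllm from "@mlc-ai/web-llm";

async function main() {
  // Model id is illustrative; any chat model from prebuiltAppConfig works.
  const engine = await webllm.CreateEngine("Llama-2-7b-chat-hf-q4f32_1");

  const messages: webllm.ChatCompletionMessageParam[] = [
    { role: "user", content: "List three US state capitals." },
  ];
  const round1 = await engine.chat.completions.create({ messages });
  messages.push({
    role: "assistant",
    content: round1.choices[0].message.content ?? "",
  });

  // Round 2: the caller resends the full history; the engine detects the
  // shared prefix and reuses its KVCache instead of prefilling from scratch.
  messages.push({ role: "user", content: "Now list three more." });
  const round2 = await engine.chat.completions.create({ messages });
  console.log(round2.choices[0].message.content);
}

main();
```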

### WASM Version

WASMs are still `v0_2_30` as no compile-time changes are required.

### tvmjs
Updated tvmjs to:
- mlc-ai/relax@c7bdcab
  - i.e. apache/tvm@d1e24ca