Download of model is too slow #160
The browser-loaded/hosted models are fairly experimental and only work in Chrome and Edge. Typically people choose to host a model locally on their own machine with a tool like LM Studio, which lets you download and host any number of models and exposes a local API you can use to connect and communicate with the model. The docs for getting a local model and server running are here, and it's pretty painless: https://lmstudio.ai/docs/local-server#using-the-local-server. The only adjustment needed is to enable the Cross-Origin Resource Sharing (CORS) option for the server. Once the server is running, you just enter the local server URL into the custom config section, select the model, enter the max token value (1024 is a good start), and it's good to go.
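For reference, here's a minimal sketch of what talking to that local server looks like from code, assuming LM Studio's default OpenAI-compatible endpoint at http://localhost:1234 and that the CORS option mentioned above is enabled (the URL, port, and model behavior may differ depending on your LM Studio settings):

```typescript
// Minimal sketch: query a model hosted by LM Studio's local server.
// Assumes the default OpenAI-compatible endpoint on port 1234 and
// CORS enabled in the server settings, as described above.
const LOCAL_SERVER_URL = "http://localhost:1234/v1/chat/completions";

async function askLocalModel(prompt: string): Promise<string> {
  const response = await fetch(LOCAL_SERVER_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: [{ role: "user", content: prompt }],
      max_tokens: 1024, // the max token value suggested above
    }),
  });
  if (!response.ok) {
    throw new Error(`Local server returned ${response.status}`);
  }
  const data = await response.json();
  // OpenAI-compatible responses put the reply text here
  return data.choices[0].message.content;
}

// Usage:
// askLocalModel("Hello!").then(console.log);
```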
I'm currently waiting up to 30 minutes for a model to load. Is there a way to just download it onto a USB drive and plug it in?