Closed
Labels
enhancement (New feature or request) · generation quality (Quality of model output) · model (Model specific) · stale
Description
@ggerganov Thanks for sharing llama.cpp. As usual, great work.
Question rather than issue. How difficult would it be to make ggml.c work with a Flan checkpoint, like T5-xl/UL2, and then quantize it?
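For context on the quantization half of the question: ggml quantizes weight tensors in fixed-size blocks, storing low-bit integers plus a per-block scale. Below is a minimal, self-contained sketch of that idea in Python — a simplified symmetric 4-bit block scheme for illustration, not ggml's exact Q4_0 on-disk layout (block size, scale choice, and nibble packing here are assumptions):

```python
# Simplified sketch of block-wise 4-bit quantization in the spirit of
# ggml's Q4 formats. Illustrative only; not the real Q4_0 layout.
BLOCK = 32  # ggml-style fixed block size (assumed here)

def quantize_block(xs):
    """Quantize one block of floats to ints in [-7, 7] plus a scale."""
    amax = max(abs(x) for x in xs)
    d = amax / 7.0 if amax > 0 else 1.0   # per-block scale factor
    q = [max(-7, min(7, round(x / d))) for x in xs]
    return d, q

def dequantize_block(d, q):
    """Reconstruct approximate floats from scale + quantized ints."""
    return [d * v for v in q]

if __name__ == "__main__":
    import random
    random.seed(0)
    block = [random.uniform(-1.0, 1.0) for _ in range(BLOCK)]
    d, q = quantize_block(block)
    recon = dequantize_block(d, q)
    err = max(abs(a - b) for a, b in zip(block, recon))
    print(f"max reconstruction error: {err:.4f}")
```

The rounding error per weight is bounded by half the block scale, which is why per-block (rather than per-tensor) scales keep quantized T5-sized models usable.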
Would love to be able to run those models in a browser, much like what you did with whisper.cpp wasm.
Thanks again. (I can move this post somewhere else if you prefer since it's not technically about Llama. Just let me know where.)