
Could we request support for a smallish (~4-5B param) modern vision LLM? LLava-1.6 or Nanollava? #988

Open
kinchahoy opened this issue Aug 1, 2024 · 1 comment
Labels
enhancement New feature or request

Comments

@kinchahoy
🚀 The feature, motivation and pitch

Good native PyTorch support for LLM inference is key to PyTorch's continued success. Vision LLMs tend to have uneven support in mainstream inference engines such as llama.cpp because components like CLIP/SigLIP must be reimplemented for each engine. If PyTorch natively supported performant vision LLMs with quantization on ARM devices, it would make a big difference in usability.
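To illustrate the quantization-for-CPU/ARM angle of the request: below is a minimal sketch using PyTorch's built-in dynamic int8 quantization on a toy stand-in module. It is not an actual LLaVA-1.6 or NanoLLaVA model (the module names and sizes here are made up for illustration); it only shows the mechanism that would shrink an LLM's weight matrices for on-device inference.

```python
import torch
import torch.nn as nn

# Toy stand-in for a vision-LLM projector + decoder head.
# A real LLaVA-1.6-class model would feed SigLIP/CLIP image
# features into layers like these; the sizes here are arbitrary.
class TinyDecoder(nn.Module):
    def __init__(self, dim=64, vocab=128):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.head = nn.Linear(dim, vocab)

    def forward(self, x):
        return self.head(torch.relu(self.proj(x)))

model = TinyDecoder().eval()

# Dynamic int8 quantization of all Linear layers: weights are stored
# as int8, activations are quantized on the fly at inference time.
qmodel = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 8, 64)  # (batch, sequence, hidden)
out = qmodel(x)
print(out.shape)
```

The same `quantize_dynamic` call applies unchanged to the linear layers of a full language-model decoder, which is why native support for the vision encoder is the main missing piece rather than the quantization itself.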

Alternatives

No response

Additional context

No response

RFC (Optional)

No response

@kinchahoy kinchahoy changed the title Could we request support for a smallish (~4-5B param) modern vision LLM? LLava-1.6 or Nano? Could we request support for a smallish (~4-5B param) modern vision LLM? LLava-1.6 or Nanollava? Aug 1, 2024
@byjlw
Contributor

byjlw commented Aug 1, 2024

We are currently working on llava support. Getting close! @Gasoonjia @larryliu0820

@Jack-Khuu Jack-Khuu added the enhancement New feature or request label Aug 1, 2024