Issues: containers/ramalama
- #471: Local cuda container build fails with "unsupported instruction `vpdpbusd'" (opened Nov 20, 2024 by nzwulfin)
- #458: Ramalama Container needs updating on the quay.io to use new llama-simple-chat (opened Nov 15, 2024 by bmahabirbu)
- #184: Add podman serve --generate compose MODEL which would generate a docker-compose file for running AI Model Service (opened Sep 24, 2024 by rhatdan; labeled "good first issue")
- #27: Find a way to automatically build and push x86_64 and aarch64 images (opened Aug 1, 2024 by ericcurtin)
- #9: Switch to https://github.com/abetlen/llama-cpp-python (opened Jul 30, 2024 by ericcurtin)
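Issue #184 proposes a `podman serve --generate compose MODEL` command that would emit a docker-compose file for running an AI model service. A minimal sketch of what such generated output might look like; the image name, model path, and port are hypothetical assumptions, not part of the proposal:

```yaml
# Hypothetical output of `podman serve --generate compose MODEL` (issue #184).
# Image, volume path, and port are illustrative assumptions only.
services:
  model-service:
    image: quay.io/ramalama/ramalama:latest
    ports:
      - "8080:8080"          # expose the model's HTTP API on the host
    volumes:
      - ./models:/models:ro  # mount local model files read-only
    restart: unless-stopped
```

Such a file could then be started with `podman-compose up` or `docker compose up`, turning a locally served model into a reproducible service definition.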