Releases: containers/ramalama
v0.1.1
v0.1.0
What's Changed
- We can now run models via Kompute in podman-machine by @ericcurtin in #440
- Only do dnf install for cuda images by @ericcurtin in #441
- Add --host=0.0.0.0 if running llama.cpp serve within a container by @rhatdan in #444
- Document the host flag in ramalama.conf file by @rhatdan in #447
- Add granite-8b to shortnames.conf by @rhatdan in #448
- Fix RamaLama container image build by @ericcurtin in #446
- Bump to v0.1.0 by @rhatdan in #449
Full Changelog: v0.0.23...v0.1.0
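The host flag documented in #444/#447 above lives in ramalama.conf. A minimal sketch of what such a drop-in might look like (section and key names assumed here, not copied from the shipped file; check the ramalama.conf man page added in #391):

```toml
[ramalama]
# Bind address passed to llama.cpp serve when running inside a container;
# 0.0.0.0 makes the server reachable from outside the container.
host = "0.0.0.0"

# Default port for serve, configurable per #424 (value assumed for illustration).
port = "8080"
```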
v0.0.23
What's Changed
- Remove omlmd as a dependency by @ericcurtin in #428
- Check versions match in CI by @ericcurtin in #427
- Fix podman run oci://... by @rhatdan in #429
- Attempt to remove OCI Image if removing as Ollama or Huggingface fails by @rhatdan in #432
- Run does not have generate, so remove it by @rhatdan in #434
- Run the command by default without stderr by @rhatdan in #436
- Closing stderr on podman command is blocking progress information and… by @rhatdan in #438
- Make it easier to test-run manually by @rhatdan in #435
- Install llama-cpp-python[server] by @ericcurtin in #430
Full Changelog: v0.0.22...v0.0.23
v0.0.22
What's Changed
- Bump to v0.0.21 by @rhatdan in #410
- Update ggerganov/whisper.cpp digest to 0377596 by @renovate in #409
- Use subpath for OCI Models by @rhatdan in #411
- Consistency changes by @ericcurtin in #408
- Split out kube.py from model.py by @rhatdan in #412
- Fix mounting of Ollama AI Images into containers. by @rhatdan in #414
- Start an Asahi version by @ericcurtin in #369
- Generate MODEL.yaml file locally rather than just to stdout by @rhatdan in #416
- Bugfix comma by @ericcurtin in #421
- Fix nocontainer mode by @rhatdan in #419
- Update ggerganov/whisper.cpp digest to 31aea56 by @renovate in #425
- Add --generate quadlet/kube to create quadlet and kube.yaml by @rhatdan in #423
- Allow default port to be specified in ramalama.conf file by @rhatdan in #424
- Made run and serve consistent with model exec path. Fixes issue #413 by @bmahabirbu in #426
- Bump to v0.0.22 by @rhatdan in #415
Full Changelog: v0.0.21...v0.0.22
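The `--generate quadlet` option added in #423 emits a Podman Quadlet unit for the model service. A hypothetical sketch of the kind of `.container` file such a generator produces (unit contents assumed for illustration, not taken from RamaLama's actual output):

```ini
# model.container — consumed by podman-systemd (Quadlet)
[Unit]
Description=RamaLama model service

[Container]
# Image name and port are placeholders; the real values come from the
# model being served and the port configured in ramalama.conf (#424).
Image=quay.io/ramalama/ramalama:latest
PublishPort=8080:8080

[Install]
WantedBy=multi-user.target default.target
```

Placing such a file under /etc/containers/systemd/ lets systemd start the container like any other unit.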
v0.0.21
What's Changed
- Fix rpm build by @rhatdan in #350
- Add environment variables for checksums to ramalama container by @rhatdan in #355
- Change default container name for ROCm container image by @ericcurtin in #360
- Allow removal of models specified as shortnames by @rhatdan in #357
- Added a check to the zsh completions generation step by @ericcurtin in #356
- Add vulkan image and show size by @ericcurtin in #353
- Update ggerganov/whisper.cpp digest to 0fbaac9 by @renovate in #363
- Allow pushing of oci images by @rhatdan in #358
- Fix Makefile to be less stringent on failures of zsh by @smooge in #368
- Add support for --authfile and --tls-verify for login by @rhatdan in #364
- Fix incompatible Ollama paths by @swarajpande5 in #370
- Fix shortname paths by @swarajpande5 in #372
- Change to None instead of "" by @ericcurtin in #371
- Kompute build is warning it is missing this package by @ericcurtin in #366
- Add --debug option to show exec_cmd and run_cmd commands by @rhatdan in #373
- Add support for pushing a file into an OCI Model image by @rhatdan in #374
- Replace huggingface-cli download command with simple https client to pull models by @swarajpande5 in #375
- Update registry.access.redhat.com/ubi9/ubi Docker tag to v9.4-1214.1729773476 by @renovate in #380
- Update ggerganov/whisper.cpp digest to c0ea41f by @renovate in #381
- Update ggerganov/whisper.cpp digest to fc49ee4 by @renovate in #382
- Update dependency huggingface/huggingface_hub to v0.26.2 by @renovate in #383
- Update dependency tqdm/tqdm to v4.66.6 - autoclosed by @renovate in #385
- Update ggerganov/whisper.cpp digest to 1626b73 by @renovate in #386
- Support listing and removing newly designed bundled images by @rhatdan in #378
- Fix default conman check by @rhatdan in #389
- Drop in config by @ericcurtin in #379
- Update ggerganov/whisper.cpp digest to 55e4221 by @renovate in #390
- Move run_container to model.py allowing models types to override by @rhatdan in #388
- Update ggerganov/whisper.cpp digest to 19dca2b by @renovate in #392
- Add man page information for ramalama.conf by @rhatdan in #391
- More debug info by @ericcurtin in #394
- Make transport use config by @rhatdan in #395
- Enable containers on macOS to use the GPU by @slp in #397
- chore(deps): update ggerganov/whisper.cpp digest to 4e10afb by @renovate in #398
- Time for removal of huggingface_hub dependency by @ericcurtin in #400
- Mount model.car volumes into container by @rhatdan in #396
- Remove huggingface-hub references from spec file by @ericcurtin in #401
- Packit: disable osh diff scan by @lsm5 in #403
- Make minimal change to allow for ramalama to build on EL9 by @smooge in #404
- reduced the size of the nvidia containerfile by @bmahabirbu in #407
- Move /run/model to /mnt/models to match k8s model.car definition by @rhatdan in #402
- Verify pyproject.py and setup.py have same version by @rhatdan in #405
- Make quadlets work with OCI images by @rhatdan in #406
Full Changelog: v0.0.20...v0.0.21
v0.0.20
What's Changed
- Add support for testing with docker by @rhatdan in #320
- chore(deps): update ggerganov/whisper.cpp digest to d3f7137 by @renovate in #321
- Make changes to spec file to better pass Fedora packaging guidelines by @smooge in #318
- Fix erroneous output in CUDA containerfile by @bmahabirbu in #322
- chore(deps): update ggerganov/whisper.cpp digest to a5abfe6 by @renovate in #323
- There are many cases where macOS support is broken by @ericcurtin in #325
- chore(deps): update dependency huggingface/huggingface_hub to v0.26.0 by @renovate in #328
- State Containerfile is available but not built and pushed by @ericcurtin in #329
- split build from validate in Makefile by @rhatdan in #326
- Remove duplicate GitHub Actions workflow runs in PRs by @p5 in #330
- Add hf:// as an alias to huggingface:// by @ericcurtin in #324
- Break up tests: build, bats, bats-nocontainer, docker, mac-nocontainer by @rhatdan in #331
- Update llama.cpp to fix granite3-moe models by @ericcurtin in #340
- Make sure we specify bash here by @ericcurtin in #337
- If installed in /usr/local, ramalama libs cannot be found by @ericcurtin in #333
- Kompute Containerfile by @ericcurtin in #334
- Add kubernetes.YAML support to ramalama serve by @rhatdan in #327
- Add AI Lab models to shortnames by @MichaelClifford in #345
- Build container images only on changes by @p5 in #332
- Make more spec changes to match RPM evaluation by @smooge in #347
- Free up space for docker tests. by @rhatdan in #343
- Allow the removal of more than one model via rm command by @rhatdan in #344
- Fix spelling mistakes in markdown by @rhatdan in #348
- Bump to v0.0.20 by @rhatdan in #349
New Contributors
- @smooge made their first contribution in #318
- @MichaelClifford made their first contribution in #345
Full Changelog: v0.0.19...v0.0.20
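The hf:// alias from #324 and the AI Lab shortnames from #345 both feed into shortnames.conf, which maps short model names to full transport URLs. An illustrative fragment (the mapping below is a made-up example, not a real entry from the shipped file):

```toml
[shortnames]
# Hypothetical entry: "granite" resolves to a full model URL,
# so `ramalama run granite` works without typing the whole path.
"granite" = "hf://instructlab/granite-7b-lab-GGUF/granite-7b-lab-Q4_K_M.gguf"
```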
v0.0.19
v0.0.18
What's Changed
- Bump to v0.0.17 by @rhatdan in #298
- Add missing svg file for logos by @rhatdan in #299
- Update ggerganov/whisper.cpp digest to 06a1da9 by @renovate in #297
- Check for "krunkit" or "libkrun" strings by @ericcurtin in #303
- allow empty RAMALAMA_CONTAINER_ENGINE by @maxandersen in #307
- Update docker.io/nvidia/cuda Docker tag to v12.6.2 by @renovate in #310
- deps: bump omlmd to incorporate oras-py changes by @tarilabs in #311
- AMD GPU (rocm) support is enabled by @ericcurtin in #312
- No whitespace in container label by @ericcurtin in #306
- Changes to start trimming rocm image by @ericcurtin in #309
- Add --engine to allow overriding engine from command line by @rhatdan in #313
- chore(deps): update ggerganov/whisper.cpp digest to b604906 by @renovate in #314
- turn on --nocontainer tests by @rhatdan in #231
- Add zsh completions by @rhatdan in #315
New Contributors
- @maxandersen made their first contribution in #307
Full Changelog: v0.0.17...v0.0.18
v0.0.17
What's Changed
- Disable automatic detach for the serve command by @TristanCacqueray in #287
- Take into account pipx version of shortnames.conf by @ericcurtin in #292
- On Mac, run in container ... by @rhatdan in #291
- TMT: basic rpm installation check by @lsm5 in #276
- Remove requirements.txt, pyproject.toml handles it by @rhatdan in #290
- Enable resumable downloads in ollama.py by @swarajpande5 in #293
- Add rocm Containerfile by @ericcurtin in #240
- Add checksum verification in ollama.py by @swarajpande5 in #294
- Handles stdin input by @arouene in #295
- Show progress bar only for pull_blob in ollama.py by @swarajpande5 in #296
Full Changelog: v0.0.16...v0.0.17
v0.0.16
What's Changed
- Should print better error if using ps or stop with --nocontainer by @rhatdan in #268
- Add tqdm as a requirement by @rhatdan in #269
- docs: Added links to llama.cpp REST API documentation by @josiahbryan in #274
- Rpm should recommend podman be installed by @rhatdan in #272
- Created Nv docs by @bmahabirbu in #275
- Change directory to fix CI by @ericcurtin in #277
- Fix hack/man-page-checker in CI by @ericcurtin in #278
- chore(deps): update ggerganov/whisper.cpp digest to fdbfb46 by @renovate in #280
- Remove useless cli run parameter --prompt by @arouene in #279
- Report error when AI Service is not available on host by @rhatdan in #281
- Add pipx paths to PATH environment variable by @rhatdan in #282
- Fix typo by @TristanCacqueray in #283
- Add entry to .codespelldict for path by @ericcurtin in #286
- Ramalama to RamaLama by @ericcurtin in #284
New Contributors
- @josiahbryan made their first contribution in #274
- @TristanCacqueray made their first contribution in #283
Full Changelog: v0.0.14...v0.0.16