
Conversation

@rhatdan (Member) commented Apr 25, 2025

This allows users to experiment with content and get it into a container image.

Fixes: #1274

Summary by Sourcery

Modify the container image build process to install ramalama from the local source directory instead of cloning the remote repository. Increment the project version to 0.8.0.

Build:

  • Install ramalama from a mounted local directory during container image builds instead of cloning the repository.
  • Mount the local ramalama source directory into the build container environment.
  • Update the RPM spec file version to 0.8.0.
  • Disable SELinux labeling for the mounted volume in container builds.

Deployment:

  • Update release scripts to tag images and manifests with the new version 0.8.0.

Chores:

  • Increment project version from 0.7.5 to 0.8.0.

@sourcery-ai bot (Contributor) commented Apr 25, 2025

Reviewer's Guide by Sourcery

This pull request modifies the container build process to use the local ramalama source directory by mounting it into the container build environment, rather than cloning the repository inside the container. This allows building container images with local code changes. The project version is also updated.

No diagrams generated as the changes look simple and do not need a visual representation.
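
As a rough, hypothetical illustration of the mount-based approach (the mount path, image tag, and exact option spelling are assumptions, not the actual contents of container_build.sh), the host-side build step might look something like:

  #!/usr/bin/env bash
  # Hypothetical sketch: build from the local checkout instead of cloning
  # ramalama inside the container. Paths and the image tag are assumptions.
  set -euo pipefail

  ramalama_src="$(pwd)"   # local ramalama checkout to build from

  # -v exposes the checkout to RUN steps during the build;
  # --security-opt label=disable skips SELinux relabeling of the mounted tree.
  podman build \
      --security-opt label=disable \
      -v "${ramalama_src}:/src/ramalama" \
      -t ramalama-cli \
      .

The important property is that whatever is in the working tree, committed or not, is what ends up installed in the image.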

File-Level Changes

Change: Modify container build process to use local source directory
Details:
  • Remove logic to clone and build ramalama inside container scripts
  • Add function to install ramalama from a specified path (see the sketch after this table)
  • Update container build scripts to call the new install function
  • Add logic to mount the local ramalama directory into the container build environment
  • Add security option for the volume mount
Files:
  • container-images/scripts/build-cli.sh
  • container-images/scripts/build_llama_and_whisper.sh
  • container_build.sh

Change: Update project version from 0.7.5 to 0.8.0
Details:
  • Update version in project configuration files
  • Update version in RPM spec file
  • Update hardcoded version tags in release scripts
Files:
  • pyproject.toml
  • ramalama/version.py
  • rpm/python-ramalama.spec
  • scripts/release-image.sh
  • scripts/release.sh
  • setup.py
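
The install-from-path helper itself is not reproduced in the guide; a minimal sketch of what such a function could look like in the build scripts, assuming the source is mounted at /src/ramalama and installed with pip (both assumptions), is:

  # Hypothetical helper: install ramalama from the mounted source tree rather
  # than from a git clone, so local uncommitted changes end up in the image.
  install_ramalama_from_path() {
      local src="${1:-/src/ramalama}"
      pip install "$src"
  }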


@sourcery-ai bot (Contributor) left a comment

Hey @rhatdan - I've reviewed your changes - here's some feedback:

Overall Comments:

  • Mounting the local directory makes builds less deterministic; consider whether this approach is suitable for release builds.
  • Consider whether disabling security labels with --security-opt=label=disable is essential, or whether specific volume labeling could be used instead (see the sketch after this list).
  • The version number is repeated across several files; centralizing it could simplify future version bumps.
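
As a sketch of the specific-labeling alternative (not taken from the PR), podman's :z / :Z volume options relabel just the mounted tree instead of disabling labeling for the whole build:

  # Hypothetical alternative to --security-opt label=disable: relabel only the
  # mounted source tree (:z for a shared label, :Z for a private one).
  podman build \
      -v "$(pwd):/src/ramalama:z" \
      -t ramalama-cli \
      .

Note that :z/:Z change the SELinux labels of the host files themselves, which may be undesirable for a checkout living in a home directory; that trade-off is one reason label=disable is sometimes preferred.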
Here's what I looked at during the review
  • 🟢 General issues: all looks good
  • 🟢 Security: all looks good
  • 🟢 Testing: all looks good
  • 🟢 Complexity: all looks good
  • 🟢 Documentation: all looks good


@ericcurtin (Member) commented

This is better; that git clone wasn't great before 😄

@benoitf changed the title from "Use currenct ramalama directory rather them main from repo" to "Use current ramalama directory rather them main from repo" on Apr 25, 2025
@benoitf (Contributor) commented Apr 25, 2025

I think something has changed in the way the CLI is configured: if I build the CLI image from the main branch I am able to start containers, but with this branch I get the following error:

serving on port 8086
os.execvp(llama-server, ['llama-server', '--port', '8086', '--model', '/Users/benoitf/.local/share/ramalama/store/store/ollama/smollm2/smollm2/blobs/sha256-5d3dfca5f3092cc44b1d2873ea27e916f4f67c82ea52d575a39e727f7f2326b6', '--alias', 'smollm2:135m', '--ctx-size', '2048', '--temp', '0.8', '--jinja', '--cache-reuse', '256', '-ngl', '999', '--host', '0.0.0.0'])
Error: RamaLama requires the "llama-server" command to be installed on the host when running with --nocontainer.
RamaLama is designed to run AI Models inside of containers, where "llama-server" is already installed.
Either install a package containing the "llama-server" command or run the workload inside of a container.
[Errno 2] No such file or directory

@benoitf (Contributor) commented Apr 25, 2025

Related to these two lines being removed:

  # link podman-remote to podman for use by RamaLama
  ln -sf /usr/bin/podman-remote /usr/bin/podman

@benoitf (Contributor) commented Apr 25, 2025

I don't know if it's also possible (not for this PR), but can we have a way to not pass the --no-cache option when using make build IMAGE=ramalama-cli? (There is AFAIK no way to pass the -r flag to the ./build_container.sh script.)

Or optimize the development mode: if I modify one line of a RamaLama Python .py file, I expect that rebuilding the container will just reinstall that file inside the container image, not pull and install again the tons of RPM packages it depends on.

When I run make build IMAGE=ramalama-cli, it fetches and installs 142 packages and then another 75 packages each time, so I always need a fast internet connection.
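
Not part of this PR, but as a sketch of the kind of knob being asked for (NO_CACHE is a hypothetical variable, not an existing option of the build script):

  # Hypothetical: let callers opt out of --no-cache, e.g.
  #   NO_CACHE=0 make build IMAGE=ramalama-cli
  build_args=()
  if [ "${NO_CACHE:-1}" != "0" ]; then
      build_args+=("--no-cache")
  fi
  podman build "${build_args[@]}" -t ramalama-cli .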

This allows users to experiment with content and get it into
container image.

Fixes: containers#1274

Signed-off-by: Daniel J Walsh <[email protected]>
@rhatdan merged commit aefb785 into containers:main on Apr 26, 2025 (15 checks passed).


Development

Successfully merging this pull request may close these issues:

  • build script to build the CLI image should allow to use the content of the current ramalama directory
