Consolidate container images on Docker Hub and Quay.io #5436
I would use a similar approach other images are using, i.e. have the tags not only be the distribution, but a combination of version and distribution.
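The suggested "version plus distribution" tag could look like the following sketch. This is a hypothetical illustration, not a list of actually published tags; the distribution suffixes are assumptions for the example:

```python
def image_tag(version: str, distro: str) -> str:
    """Combine a 389-ds version and a base distribution into a single tag."""
    return f"{version}-{distro}"

# Hypothetical tag names for illustration only:
print(image_tag("2.3.4", "fedora-38"))   # 2.3.4-fedora-38
print(image_tag("2.3.4", "sle-15sp4"))   # 2.3.4-sle-15sp4
```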
@vashirov I'm more than happy to standardise on one or the other. Have I previously given you admin access on the Docker Hub repos? We should probably have a better idea of how we want to manage Docker going forward, as it's important for both RH and SUSE :)
Hey @Firstyear! I don't have access to the Docker Hub repos; it would be great if you could add me there :) My username is the same. I also want to look again into using UBI or SLE BCI as the base images. Last time I checked, there were some missing dependencies. I will make a new PoC repo (to avoid any disruption to the current repo's users) with the new tagging/versioning scheme and share it here.
Okay, sent you the Docker invite. :)

SLE BCI is probably a good idea. We have BCI images for 389-ds (I think?), but I'm not 100% sure whether they are public. I think the challenge with the SLE BCI images is that to "install" packages you need an active SUSE subscription (which I have as an employee, but which we don't have for Docker Hub/community). That would mean we'd need to build them in our internal build service (IBS, as opposed to the openSUSE build service, OBS). So we could do them as BCI, but we'd really just be "mirroring" them to hub.docker.com. I'm also not sure whether that's allowed per the content/license, so I'd need to ask some questions internally.

So I think a PoC repo is a good idea, as is deciding what we want the tags to look like and what we'd want to push. For example, do we want a single set of images on hub.docker.com just with versions and patches? Or do we want to split Fedora / RH-CentOS / SLE images, each with their own versions?
Thanks for the invite, accepted. SLE BCI is freely available: https://registry.suse.com/bci/bci-base-15sp4/index.html and can be distributed: https://www.suse.com/products/base-container-images/FAQ/#can-i-freely-distribute-applications-built-on-bci? I was able to pull it and install 389-ds from the repos there without any subscription, but it has an older version of 389-ds.
We already have a BCI image you can re-publish, though: https://registry.suse.com/suse/389-ds/index.html. I need to check which versions of 389-ds / SLE that container is based on to be sure. I can speak with the BCI maintainers to see if we can get them labeled/tagged separately for 2.0 vs other versions.

The issue you are seeing with libldap-data is because 15.4 is built with openldap-2.4, but network:ldap has 2.6. This means the unversioned libldap-data package is at 2.6, but if anything in 15.4 links against libldap-2_4, it can't then access the libldap-data that goes with it. To fix this we'd need to rename libldap-data in SLE, but that's unlikely to happen due to the conservative package-naming practices.

So if we want to build specific versions for containers to go into BCI, we either need to do that as SUSE (we could then mirror those to hub.docker.com, since we can build them in our internal build service), or we'll need to set up a dedicated repo in OBS for the versioned builds.
@vashirov After checking with some people inside SUSE: we can't mirror BCI images on hub.docker.com.
So this is the downstream image; it contains an (old) version of 389DS that comes from the SLE repos.
It's
Yeah, these are the kinds of dependency issues I alluded to before. UBI also has a missing dependency, for which I filed a Bugzilla. If it isn't resolved, I'm leaning towards using
I think there is some misunderstanding. I don't want to redistribute BCI images as they are, with the content from the BCI repos, but rather use BCI/UBI as our base image and build 389DS in OBS/COPR targeting the relevant chroots.
EULA: https://www.suse.com/licensing/eula/download/sbci/suse-base-container-image-licence-en.pdf Relevant part:
IANAL, but it looks like I can redistribute a container image that uses BCI as the base, with our own repos/content on top of it. Am I wrong? Anyway, this and the dependency issues make BCI and UBI less viable options, unfortunately.
I'll have to double-check this with product management to be sure, but your reading of that text seems correct. Yes, the dependency issue is annoying; I think that's just an artefact of network:ldap. If we want to do SLE BCI + multiple versions of 389-ds, then IMO we'll need a different approach to creating these that avoids that OBS repo.
@vashirov For now, what structure of tags should we aim for? Once we know that, we can work out the specifics of the images later.
I talked to some folks internally and was pointed to sclorg as an example we might want to follow. Proposed structure (I'm using semver MAJOR.MINOR.PATCH notation here):
That would expand to:
Let's take
So if a user wants a specific patch version, they should use

Now to the building part. We have COPR repos that follow Fedora releases, not 389DS versions. I've heard complaints from users who don't want to upgrade their instances to the next minor version, especially since we don't guarantee backward compatibility between minor versions. So we should have a similar version structure:
And then we can use these repos to build our container images, including multiarch support.
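The concrete tag listing in the proposal above was lost in this transcript, but the sclorg convention it references is a "tag ladder" where each prefix of the full version floats to the newest matching release. A minimal sketch of that expansion, as an assumption about what the proposal describes:

```python
def expand_tags(version: str) -> list[str]:
    """Expand a MAJOR.MINOR.PATCH version into its floating-tag ladder.

    '2.3.4' -> ['2', '2.3', '2.3.4']: the ':2' and ':2.3' tags float to
    the newest matching release, while ':2.3.4' stays pinned to one
    patch level (what a user picks for reliable rollbacks).
    """
    parts = version.split(".")
    return [".".join(parts[:i + 1]) for i in range(len(parts))]

print(expand_tags("2.3.4"))  # ['2', '2.3', '2.3.4']
```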
That proposal sounds really promising. It would allow Helm charts to use the patch versions to support rollbacks. Thanks for taking care of this, I really appreciate it!
It's going to be a bit of a challenge for us with the git-commit hash when it comes to the BCI images, I think. We have an internal service that builds branches like 2.3 at any git patch level, rather than releasing specifically at the minor point releases. So we could likely do something like
I think a remaining question is what we should do with the dirsrv:latest image, since that will be the common first point of contact for a lot of people. I'm happy for it to stay on Tumbleweed, tracking the "absolute latest" of the project, but that may not be what you want.
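The exact commit-level scheme mentioned above was elided from this transcript. As a purely hypothetical illustration of tagging a branch at an arbitrary git patch level, one could embed an abbreviated commit hash in the tag (the `-g<sha>` style mirrors `git describe` output; the naming here is an assumption, not the scheme the thread settled on):

```python
def snapshot_tag(branch: str, commit_sha: str) -> str:
    """Build a tag for a branch snapshot at a given commit.

    Hypothetical scheme: '<branch>-g<short-sha>', using a 7-character
    hash abbreviation in the style of `git describe`.
    """
    return f"{branch}-g{commit_sha[:7]}"

print(snapshot_tag("2.3", "1a2b3c4d5e6f7a8b"))  # 2.3-g1a2b3c4
```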
I was thinking of setting up additional GH Actions to submit builds to COPR (which I just found out also supports

As for
The problem, IMO, is that :latest is a default in docker; people can search, say:

And they won't be shown the tags; they'll just run "docker pull 389ds/dirsrv", which implies :latest. So I think we should continue to have :latest and just decide what it should point at. Even more important, there are likely already people using that tag, so we should continue to provide updates to it. I think GH Actions to build automatically is a good idea; it would save a lot of hassle. Would the GH Actions be part of this repo or a separate repo in the 389ds org?
I currently have GH Actions in a separate repo; it runs a nightly cron job. But to integrate with repo events such as tagging and branching, it would be easier to have these actions in the main 389-ds-base repo. BTW, thanks for adding tags to the Docker Hub repo! I'm working on other things this week, but I hope to get back to this ASAP.
I think that, just by convention and by how docker works, it's a good idea to have a

I think the GH Actions being here would be great; sounds good to me :)
Issue Description
We have images based on:
openSUSE: https://hub.docker.com/r/389ds/dirsrv
Fedora/CentOS Stream: https://quay.io/repository/389ds/dirsrv
Since they have the same namespace/image:tag, users get different images depending on their tool of choice (docker vs. podman). I think we should provide the same images in both registries with proper tags, i.e.

Not sure what should point to latest, though. @Firstyear, thoughts?
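The docker-vs-podman difference comes from short image names: docker hard-codes docker.io as the registry, while podman consults a configurable registry search list. Until the registries carry identical content, users can sidestep the ambiguity with fully-qualified references; a small sketch (the registries and repo names are the ones mentioned in this issue):

```python
def qualified_ref(registry: str, namespace: str, image: str, tag: str = "latest") -> str:
    """Build a fully-qualified image reference, so docker and podman
    resolve the exact same image regardless of registry search order."""
    return f"{registry}/{namespace}/{image}:{tag}"

print(qualified_ref("docker.io", "389ds", "dirsrv"))       # docker.io/389ds/dirsrv:latest
print(qualified_ref("quay.io", "389ds", "dirsrv", "2.3"))  # quay.io/389ds/dirsrv:2.3
```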