Upstream rebase #23
Conversation
Skipping CI for Draft Pull Request.
/test all
/retest |
We currently have a large amount of copy/pasted code in our various in-tree provisioners. We have three bundle provisioners (plain, registry, and helm) and two BundleDeployment (BD) provisioners (plain and helm). The vast majority of the code in each of these provisioners is identical; the only differences are in:
- how the bundle content is validated and converted before storing
- how the bundle content is validated and rendered into a Helm chart before being applied to the cluster

This PR introduces a few interfaces to capture these implementation-specific differences, along with generic Bundle and BD reconcilers that use those interfaces, so the previously copy/pasted code is now shared and DRY-ed up.
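To make the shape of that refactor concrete, here is a minimal Go sketch under assumed names; the Handler, Deployer, and BundleReconciler identifiers below are illustrative placeholders, not necessarily rukpak's actual exported API.

```go
// Package provisioner sketches how format-specific logic can be hidden behind
// small interfaces consumed by one generic reconciler. Hypothetical names.
package provisioner

import (
	"context"
	"io/fs"

	"helm.sh/helm/v3/pkg/chart"
	"helm.sh/helm/v3/pkg/chartutil"
)

// Handler is the bundle-provisioner-specific hook: validate and convert the
// unpacked bundle filesystem before it is stored.
type Handler interface {
	Handle(ctx context.Context, fsys fs.FS) (fs.FS, error)
}

// Deployer is the BundleDeployment-specific hook: validate the stored bundle
// content and render it into a Helm chart plus values before it is applied.
type Deployer interface {
	Render(ctx context.Context, fsys fs.FS) (*chart.Chart, chartutil.Values, error)
}

// BundleReconciler owns the shared unpack/store/status logic and delegates
// only the format-specific step to its Handler; the plain, registry, and helm
// provisioners differ only in which Handler they plug in.
type BundleReconciler struct {
	Handler Handler
	// shared fields (client, unpacker, storage) elided
}
```

Each concrete provisioner then supplies only its own Handler or Deployer, while unpacking, storage, status reporting, and applying live once in the generic reconcilers.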
Signed-off-by: akihikokuroda <akuroda@us.ibm.com>
Signed-off-by: Tony Jin <kavinjsir@gmail.com>
Explicitly specify localhost for some container images, otherwise docker.io ends up being used as the default registry.
This required moving to the k8s 1.26 APIs and making subsequent changes. Fix various lint errors caused by that update.
Undo go-git update
Problem: The Kubernetes project is moving its images from the k8s.gcr.io registry to the registry.k8s.io registry and our project relies on one of the images being moved. More information can be found here: kubernetes/k8s.io#4780 Solution: Update the referenced image to point to the same image hosted at the new registry. Signed-off-by: Alexander Greene <greene.al1991@gmail.com>
Adds debug make target to rukpak. The debug target is essentially the same as the run target except that, once finished, it exposes a local port to allow for remote debugging in the rukpak core container. Signed-off-by: dtfranz <dfranz@redhat.com>
Signed-off-by: Joe Lanford <joe.lanford@gmail.com>
- new name is "configmaps"
- updated unpacker to use a separate client/cache which is capable of watching all objects in the rukpak system namespace
- added a webhook for bundles and configmaps to ensure expected invariants around bundle immutability are met (a rough sketch of that invariant check follows)
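As a rough illustration of the kind of invariant that webhook enforces, the snippet below rejects any update that changes an existing Bundle's spec. The types and field names are hypothetical stand-ins, not rukpak's real API; a real implementation would hang this off a validating admission webhook handler.

```go
// Hypothetical sketch of a bundle-immutability check; not rukpak's actual code.
package webhooks

import (
	"fmt"
	"reflect"
)

// bundleSpec stands in for rukpak's real Bundle spec type.
type bundleSpec struct {
	ProvisionerClassName string
	Source               map[string]string
}

// validateBundleImmutability is the check a validating admission webhook would
// run on UPDATE requests: if the stored (old) spec differs from the incoming
// (new) spec in any way, the update is rejected, so bundle content cannot
// change after creation.
func validateBundleImmutability(oldSpec, newSpec bundleSpec) error {
	if !reflect.DeepEqual(oldSpec, newSpec) {
		return fmt.Errorf("bundle spec is immutable and may not be changed after creation")
	}
	return nil
}
```

Registered through a ValidatingWebhookConfiguration, a check like this makes the API server reject such mutations before any controller observes them.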
…licate path detection Signed-off-by: Joe Lanford <joe.lanford@gmail.com>
Updates our GitHub Actions to bring them in line with operator-controller and silence deprecation warnings. Signed-off-by: dtfranz <dfranz@redhat.com>
…rated manifests Signed-off-by: Joe Lanford <joe.lanford@gmail.com>
Signed-off-by: Joe Lanford <joe.lanford@gmail.com>
The link to the plain bundle spec was incorrect in the plain provisioner documentation.
/test all
/test all
/test all
/test verify
/hold
Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull request has been approved by: dtfranz, ncdc
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/test verify
@dtfranz @ncdc Hi, once this PR is merged it just means the code is aligned with upstream; it does not mean rukpak is in the OCP payload. Is that right? If so, we need another PR to get it into the OCP payload, so could you please share that PR with me? By the way, do the three components go into OCP together (maybe with one PR including all the manifests of these three components)? Which mechanism is used to add them to the OCP payload? Thanks.
It would be in CI and nightly builds.
We are adding a new second-level operator called cluster-olm-operator that manages the lifecycle of rukpak, catalogd, and operator-controller. They are installed as a group, and you can't pick which ones you want.
/test verify
/lgtm
/hold
@ncdc Thanks. Which repo or PR is handling the cluster-olm-operator? By the way, just a reminder again: there is an existing TP feature, platform operators, which also includes rukpak, so please try to avoid conflicts when adding cluster-olm-operator, because it also includes rukpak. Thanks.
Yes
openshift/cluster-olm-operator#18 adds rukpak to cluster-olm-operator and adds cluster-olm-operator to the OCP release payload
openshift/platform-operators#86 removes rukpak from platform-operators
…pace takes precedence if set Signed-off-by: Joe Lanford <joe.lanford@gmail.com> (cherry picked from commit 43ab9fe) Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
@dtfranz: all tests passed!
Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/unhold |
thanks a lot!
Pulling in all missing commits from the upstream repo.