Add PagerDutyIntegration CRD #90
Conversation
Force-pushed from 350efb0 to cc03b66
PR check needs a fix: #97
Force-pushed from cc03b66 to 0fa18ef
Force-pushed from 0fa18ef to 1ee1d5e
/cc @georgettica
/assign @RiRa12621
/hold
georgettica left a comment:
I don't really know why you added me, but I tried to give this a pass. Maybe some of the stuff will help.
Review comments (resolved):
pkg/controller/pagerdutyintegration/clusterdeployment_created.go
pkg/controller/pagerdutyintegration/pagerdutyintegration_controller.go
/hold cancel
RiRa12621 left a comment:
Seems sane. See the comments, plus what @georgettica said.
@RiRa12621 @georgettica Thanks a lot for the reviews! 🙂 I believe I've addressed all of the comments in one way or another, if you'd like to take a look again.
/assign @jewzaam for approval
@RiRa12621: GitHub didn't allow me to assign the following users: for, approval. Note that only openshift members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time.
In response to this: /assign @jewzaam for approval
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@RiRa12621 @jewzaam Thanks for the sign-off! :D Note that someone/something will need to create a […]
it's gonna go to int and stage anyways first, so I'd assume we notice all kinds of "problems" like this before it hits prod
This is just the default generated types and CRD. A future commit will flesh them out with the properties that we need in this project.
Command used:

    operator-sdk add controller \
      --kind=PagerDutyIntegration \
      --api-version=pagerduty.openshift.io/v1alpha1
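For reference, the default scaffold for a new kind in an operator-sdk project of this vintage looks roughly like the following. This is only a sketch of the typical generated shape, not the exact file contents of the commit; the empty Spec/Status structs get fleshed out in a later commit, as noted above.

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// PagerDutyIntegrationSpec defines the desired state of PagerDutyIntegration.
// Fields are added in a later commit.
type PagerDutyIntegrationSpec struct{}

// PagerDutyIntegrationStatus defines the observed state of PagerDutyIntegration.
type PagerDutyIntegrationStatus struct{}

// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// PagerDutyIntegration is the Schema for the pagerdutyintegrations API.
type PagerDutyIntegration struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   PagerDutyIntegrationSpec   `json:"spec,omitempty"`
	Status PagerDutyIntegrationStatus `json:"status,omitempty"`
}

// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// PagerDutyIntegrationList contains a list of PagerDutyIntegration.
type PagerDutyIntegrationList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []PagerDutyIntegration `json:"items"`
}
```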
This doesn't yet watch everything, and it doesn't handle migration of existing configured clusters to the new way with the PagerDutyIntegration CRD.
A reconciliation request for this controller is the name and namespace of a PagerDutyIntegration instance, and there might be multiple instances tied to e.g. a single ClusterDeployment on which an event occurs, so we want to trigger a reconciliation request for each PagerDutyIntegration that may be affected.
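A minimal sketch of what that fan-out can look like, assuming a clusterDeploymentSelector field on the CRD spec (the Go field, function, and import names here are illustrative, not necessarily the ones used in this PR): given a ClusterDeployment, list all PagerDutyIntegrations and enqueue a request for each one whose selector matches the ClusterDeployment's labels.

```go
package pagerdutyintegration // illustrative placement; names below are assumptions

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"

	// Assumed import paths for the Hive and PagerDutyIntegration APIs.
	hivev1 "github.com/openshift/hive/pkg/apis/hive/v1"
	pagerdutyv1alpha1 "github.com/openshift/pagerduty-operator/pkg/apis/pagerduty/v1alpha1"
)

// requestsForClusterDeployment returns a reconcile.Request for every
// PagerDutyIntegration whose clusterDeploymentSelector matches the labels on
// the given ClusterDeployment, so that one ClusterDeployment event can fan
// out to all affected PagerDutyIntegrations.
func requestsForClusterDeployment(c client.Client, cd *hivev1.ClusterDeployment) []reconcile.Request {
	var requests []reconcile.Request

	pdiList := &pagerdutyv1alpha1.PagerDutyIntegrationList{}
	if err := c.List(context.TODO(), pdiList); err != nil {
		return requests
	}

	for _, pdi := range pdiList.Items {
		selector, err := metav1.LabelSelectorAsSelector(&pdi.Spec.ClusterDeploymentSelector)
		if err != nil {
			continue
		}
		if selector.Matches(labels.Set(cd.GetLabels())) {
			requests = append(requests, reconcile.Request{
				NamespacedName: types.NamespacedName{
					Name:      pdi.Name,
					Namespace: pdi.Namespace,
				},
			})
		}
	}
	return requests
}
```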
Before this change, the names of SyncSet/ConfigMap/Secret resources were decided in several different places. Since those names are more dynamic in this patch set, it makes sense to tidy that up and derive them in one place.
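As an illustration only (the helper names and name formats below are hypothetical, not taken from this patch set), centralizing the naming could look like a small set of functions that every caller goes through:

```go
package config // illustrative package name

import "fmt"

// Hypothetical helpers that put the SyncSet/ConfigMap/Secret naming in one
// place. The exact name formats the operator uses may differ; the point is
// only that every caller derives these names from the same functions.

func SyncSetName(pdiName, cdName string) string {
	return fmt.Sprintf("%s-%s-pd-sync", cdName, pdiName)
}

func ConfigMapName(pdiName, cdName string) string {
	return fmt.Sprintf("%s-%s-pd-config", cdName, pdiName)
}

func SecretName(pdiName, cdName string) string {
	return fmt.Sprintf("%s-%s-pd-secret", cdName, pdiName)
}
```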
Since this patch set changes the way this operator works to reconcile a new custom resource, we need to be able to bring existing ClusterDeployments, with their PagerDuty services, into this new format. Rather than adding some sort of additional field to the spec that would only ever be useful once for one CR, this looks for the following annotation on PagerDutyIntegration CRs: "pd.openshift.io/legacy" with a non-empty value. This allows us to set the annotation on the OSD PagerDutyIntegration CR, so that the operator knows it is the one to use for migration. Once it has migrated all of the resources, the operator deletes the annotation from the PagerDutyIntegration CR, so that it won't attempt to migrate on every reconciliation.
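A rough sketch of how that check-and-clean-up could sit in the controller, assuming the pd.openshift.io/legacy annotation key from this commit message; the function name and import paths are assumptions, and the actual migration work is elided:

```go
package pagerdutyintegration // illustrative placement

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"

	// Assumed import path for the CRD types added in this PR.
	pagerdutyv1alpha1 "github.com/openshift/pagerduty-operator/pkg/apis/pagerduty/v1alpha1"
)

const legacyAnnotation = "pd.openshift.io/legacy"

// handleLegacyMigration runs the one-off migration when the reconciled
// PagerDutyIntegration carries the pd.openshift.io/legacy annotation with a
// non-empty value, then removes the annotation so that later reconciles skip
// this path. The actual renaming of SyncSet/ConfigMap/Secret resources is
// elided in this sketch.
func handleLegacyMigration(ctx context.Context, c client.Client, pdi *pagerdutyv1alpha1.PagerDutyIntegration) error {
	if pdi.GetAnnotations()[legacyAnnotation] == "" {
		return nil // not the CR designated for migration, or already migrated
	}

	// ... look up ClusterDeployments that were configured the old way and
	// update their SyncSet/ConfigMap/Secret names to the new format ...

	// Remove the annotation so migration does not run on every reconcile.
	annotations := pdi.GetAnnotations()
	delete(annotations, legacyAnnotation)
	pdi.SetAnnotations(annotations)
	return c.Update(ctx, pdi)
}
```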
This takes the old tests from the ClusterDeployment controller that was here before, and makes them work with the PagerDutyIntegration controller. NOTE: Some of the tests have been commented out until some dependencies get updated, as the fake k8s client can't handle list options to a client.List call correctly.
This was a requested change from code review: https://github.com/openshift/pagerduty-operator/pull/90/files#r448159838
Force-pushed from a2c5f09 to 20f2b76
This has been rebased now, as the changes to upgrade the operator-sdk version have been merged to master in #100
RiRa12621 left a comment:
/lgtm
/assign @jharrington22 for approval since @jewzaam is on PTO
@RiRa12621: GitHub didn't allow me to assign the following users: since, is, on, PTO, for, approval. Note that only openshift members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time.
In response to this: /assign @jharrington22 for approval since @jewzaam is on PTO
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: grdryn, jewzaam, RiRa12621
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Revert "Merge pull request #90 from grdryn/PagerDutyIntegrationCRD"
This change adds a PagerDutyIntegration CRD & replaces the
ClusterDeployment and SyncSet controllers with a single
PagerDutyIntegration controller. This will allow more than one
PagerDuty service to be created per cluster, so that more than one SRE
team can receive alerts for different concerns (e.g. layered products
on top of OSD).
Watches
Since there's only one controller, and there's a many-to-many
relationship between PagerDutyIntegrations and ClusterDeployments,
here's how reconciliation will be triggered:
- An event on a PagerDutyIntegration will trigger a reconcile request for that PagerDutyIntegration.
- An event on a ClusterDeployment will trigger a reconcile request for each PagerDutyIntegration that matches that ClusterDeployment (using the spec.clusterDeploymentSelector label selector).
- An event on a resource owned by a ClusterDeployment (e.g. a SyncSet, ConfigMap, or Secret) will trigger a reconcile request for each PagerDutyIntegration that matches that ClusterDeployment.
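Sketched against the older controller-runtime handler API (v0.5/v0.6) that operator-sdk projects of this era typically used, the first two watches above could be wired roughly as follows. requestsForClusterDeployment is the mapping helper sketched earlier in this thread, and the import paths are assumptions; the exact wiring in the PR may differ.

```go
package pagerdutyintegration // illustrative; assumes the older controller-runtime handler API

import (
	"sigs.k8s.io/controller-runtime/pkg/controller"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	"sigs.k8s.io/controller-runtime/pkg/manager"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
	"sigs.k8s.io/controller-runtime/pkg/source"

	// Assumed import paths, as in the earlier sketch.
	hivev1 "github.com/openshift/hive/pkg/apis/hive/v1"
	pagerdutyv1alpha1 "github.com/openshift/pagerduty-operator/pkg/apis/pagerduty/v1alpha1"
)

// add registers the watches described above with a new controller.
func add(mgr manager.Manager, r reconcile.Reconciler) error {
	c, err := controller.New("pagerdutyintegration-controller", mgr, controller.Options{Reconciler: r})
	if err != nil {
		return err
	}

	// An event on a PagerDutyIntegration enqueues that PagerDutyIntegration.
	if err := c.Watch(&source.Kind{Type: &pagerdutyv1alpha1.PagerDutyIntegration{}}, &handler.EnqueueRequestForObject{}); err != nil {
		return err
	}

	// An event on a ClusterDeployment enqueues every PagerDutyIntegration whose
	// clusterDeploymentSelector matches it. Watches on owned SyncSet/ConfigMap/
	// Secret resources would map back to PagerDutyIntegrations the same way.
	return c.Watch(&source.Kind{Type: &hivev1.ClusterDeployment{}}, &handler.EnqueueRequestsFromMapFunc{
		ToRequests: handler.ToRequestsFunc(func(a handler.MapObject) []reconcile.Request {
			cd, ok := a.Object.(*hivev1.ClusterDeployment)
			if !ok {
				return nil
			}
			return requestsForClusterDeployment(mgr.GetClient(), cd)
		}),
	})
}
```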
Migration / upgrade logic
This change also includes temporary migration logic (which can
optionally be removed easily in a subsequent release) to decide based
on an annotation whether the CR should be used for migration of
resources from the pre-PagerDutyIntegration CRD way of working. This
pd.openshift.io/legacy annotation will inform the operator that there may be clusters that were configured with a PagerDuty service
already, and that it should update the names of those resources to the
new format. Once all of the clusters have been migrated, it will
remove the annotation.