diff --git a/README.md b/README.md
index 9ed8a152..41dbc2b7 100644
--- a/README.md
+++ b/README.md
@@ -5,14 +5,14 @@ The PagerDuty operator is used to automate integrating Openshift Dedicated clust
 This operator runs on [Hive](https://github.com/openshift/hive) and watches for new cluster deployments. Hive is an API driven OpenShift cluster providing OpenShift Dedicated provisioning and management.
 
-## How the PagerDuty Opertor works
+## How the PagerDuty Operator works
 
-* PagerDuty's reconcile function watches for the `installed` field of the `ClusterDeployment` CRD and waits for the cluster to finish installation. It also sees if `api.openshift.com/noalerts` label is set on the `ClusterDeployment` of the new cluster being provisioned.
-  * The `api.openshift.com/noalerts` label is used to disable alerts from the provisioned cluster. This label is typically used on test clusters that do not require immediate attention as a result of critical issues or outages. Therefore, PagerDuty does not continue its actions if it finds this label in the new cluster's `ClusterDeployment`.
-* Once the `installed` field becomes true, PagerDuty creates a secret which contains the integration key required to communicate with PagerDuty Web application.
+* The PagerDutyIntegration controller watches for changes to PagerDutyIntegration CRs, and also for changes to appropriately labeled ClusterDeployment CRs (and ConfigMap/Secret/SyncSet resources owned by such a ClusterDeployment).
+* For each PagerDutyIntegration CR, it gets the list of matching ClusterDeployments that have the `spec.installed` field set to true and don't have the `api.openshift.com/noalerts` label set.
+* For each of these ClusterDeployments, PagerDuty creates a secret containing the integration key required to communicate with the PagerDuty web application.
 * The PagerDuty operator then creates [syncset](https://github.com/openshift/hive/blob/master/config/crds/hive_v1_syncset.yaml) with the relevant information for hive to send the PagerDuty secret to the newly provisioned cluster .
-* This syncset is used by hive to deploy the pagerduty secret to the provisioned cluster so that Openshift SRE can be alerted in case of issues on the cluster.
-* Generally, the pagerduty secret is deployed under the `openshift-monitoring` namespace and named `pd-secret` on the new cluster.
+* This syncset is used by hive to deploy the pagerduty secret to the provisioned cluster so that the relevant SRE team gets notified of alerts on the cluster.
+* The pagerduty secret is deployed to the coordinates specified in the `spec.targetSecretRef` field of the PagerDutyIntegration CR.
 
 ## Development
 
@@ -36,20 +36,17 @@ $ oc apply -f manifests/01-namespace.yaml
 $ oc apply -f manifests/02-role.yaml
 $ oc apply -f manifests/03-service_account.yaml
 $ oc apply -f manifests/04-role_binding.yaml
+$ oc apply -f deploy/crds/pagerduty_v1alpha1_pagerdutyintegration_crd.yaml
 ```
 
-Create secret with pagerduty api key, for example using a [trial account](https://www.pagerduty.com/free-trial/). You can then create an API key at https://<your-subdomain>.pagerduty.com/api_keys. Also, you need to create the ID of you escalation policy. You can get this by clicking on your policy at https://<your-subdomain>.pagerduty.com/escalation_policies#. The ID will afterwards be visible in the URL behind the `#` character.
+Create a secret with your PagerDuty API key, for example using a [trial account](https://www.pagerduty.com/free-trial/). You can then create an API key at https://<your-subdomain>.pagerduty.com/api_keys.
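+The operator expects the key in the `PAGERDUTY_API_KEY` field of a secret named `pagerduty-api-key` (see `config/config.go`). One way to create it (a sketch; it assumes the `pagerduty-operator` namespace from `manifests/01-namespace.yaml` already exists):
+
+```terminal
+$ oc create secret generic pagerduty-api-key \
+    --from-literal=PAGERDUTY_API_KEY=<api-key> \
+    -n pagerduty-operator
+```
+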
 Following is an example secret to adjust and apply with `oc apply -f <filename>`.
 
 ```yaml
 apiVersion: v1
 data:
-  ACKNOWLEDGE_TIMEOUT: MjE2MDA=
-  ESCALATION_POLICY: MTIzNA== #echo -n <escalation-policy-id> | base64
   PAGERDUTY_API_KEY: bXktYXBpLWtleQ== #echo -n <api-key> | base64
-  RESOLVE_TIMEOUT: MA==
-  SERVICE_PREFIX: b3Nk
 kind: Secret
 metadata:
   name: pagerduty-api-key
@@ -70,7 +67,7 @@ Create namespace `pagerduty-operator`.
 ```
 $ oc create namespace pagerduty-operator
 ```
 
-Continue to `Create ClusterDeployment`.
+Continue to `Create PagerDutyIntegration`.
 
 ### Option 2: Run local built operator in minishift
 
@@ -116,26 +113,22 @@ Create a copy of `manifests/05-operator.yaml` and modify it use your image from
 Deploy modified operator manifest
 
 ```terminal
-$ oc apply -f path/to/modified/operator.yaml 
+$ oc apply -f path/to/modified/operator.yaml
 ```
 
+### Create PagerDutyIntegration
-### Create ClusterDeployment
-
-`pagerduty-operator` doesn't start reconciling clusters until `status.installed` is set to `true`. To be able to set this variable via `oc edit` without actually deploying a cluster to AWS, the ClusterDeployment CRD needs to be adjusted.
+There's an example at
+`deploy/examples/pagerduty_v1alpha1_pagerdutyintegration_cr.yaml` that
+you can edit and apply to your cluster.
 
-```terminal
-$ oc edit crd clusterdeployments.hive.openshift.io
-```
+You'll need to use a valid escalation policy ID from your PagerDuty account. You
+can get this by clicking on your policy at
+https://<your-subdomain>.pagerduty.com/escalation_policies#. The ID will be
+visible in the URL after the `#` character.
 
-Remove `subsesource` part:
+### Create ClusterDeployment
 
-```
-spec:
-  [...]
-  subresources: ## delete me
-    status: {} ## delete me
-[...]
-```
+`pagerduty-operator` doesn't start reconciling clusters until `spec.installed` is set to `true`.
 
 Create ClusterDeployment.
 
@@ -144,7 +137,7 @@ $ oc create namespace fake-cluster-namespace
 $ oc apply -f hack/clusterdeployment/fake-clusterdeployment.yml
 ```
 
-If present, set `status.installed` to true.
+If present, set `spec.installed` to true.
 
 ```terminal
 $ oc edit clusterdeployment fake-cluster -n fake-cluster-namespace
diff --git a/config/config.go b/config/config.go
index 4bb42a21..04eea110 100644
--- a/config/config.go
+++ b/config/config.go
@@ -21,9 +21,8 @@ const (
 	PagerDutyAPISecretName string = "pagerduty-api-key"
 	PagerDutyAPISecretKey  string = "PAGERDUTY_API_KEY"
 	OperatorFinalizer      string = "pd.managed.openshift.io/pagerduty"
-	SyncSetPostfix         string = "-pd-sync"
-	PagerDutySecretName    string = "pd-secret"
-	ConfigMapPostfix       string = "-pd-config"
+	SecretSuffix           string = "-pd-secret"
+	ConfigMapSuffix        string = "-pd-config"
 
 	// PagerDutyUrgencyRule is the type of IncidentUrgencyRule for new incidents
 	// coming into the Service. This is for the creation of NEW SERVICES ONLY
@@ -39,3 +38,10 @@ const (
 	// ClusterDeploymentNoalertsLabel is the label the clusterdeployment will have if the cluster should not send alerts
 	ClusterDeploymentNoalertsLabel string = "api.openshift.com/noalerts"
 )
+
+// Name is used to generate the name of secondary resources (SyncSets,
+// Secrets, ConfigMaps) for a ClusterDeployment that are created by
+// the PagerDutyIntegration controller.
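+//
+// For example, Name("osd", "mycluster", SecretSuffix) returns
+// "osd-mycluster-pd-secret".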
+func Name(servicePrefix, clusterDeploymentName, suffix string) string { + return servicePrefix + "-" + clusterDeploymentName + suffix +} diff --git a/deploy/crds/pagerduty.openshift.io_pagerdutyintegrations_crd.yaml b/deploy/crds/pagerduty.openshift.io_pagerdutyintegrations_crd.yaml new file mode 100644 index 00000000..f76301b2 --- /dev/null +++ b/deploy/crds/pagerduty.openshift.io_pagerdutyintegrations_crd.yaml @@ -0,0 +1,136 @@ +apiVersion: apiextensions.k8s.io/v1beta1 +kind: CustomResourceDefinition +metadata: + name: pagerdutyintegrations.pagerduty.openshift.io +spec: + group: pagerduty.openshift.io + names: + kind: PagerDutyIntegration + listKind: PagerDutyIntegrationList + plural: pagerdutyintegrations + singular: pagerdutyintegration + scope: Namespaced + subresources: + status: {} + validation: + openAPIV3Schema: + description: PagerDutyIntegration is the Schema for the pagerdutyintegrations + API + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation + of an object. Servers should convert recognized schemas to the latest + internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this + object represents. Servers may infer this from the endpoint the client + submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: PagerDutyIntegrationSpec defines the desired state of PagerDutyIntegration + properties: + acknowledgeTimeout: + description: Time in seconds that an incident changes to the Triggered + State after being Acknowledged. Value must not be negative. Omitting + or setting this field to 0 will disable the feature. + minimum: 0 + type: integer + clusterDeploymentSelector: + description: A label selector used to find which clusterdeployment CRs + receive a PD integration based on this configuration. + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. + The requirements are ANDed. + items: + description: A label selector requirement is a selector that contains + values, a key, and an operator that relates the key and values. + properties: + key: + description: key is the label key that the selector applies + to. + type: string + operator: + description: operator represents a key's relationship to a + set of values. Valid operators are In, NotIn, Exists and + DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator + is In or NotIn, the values array must be non-empty. If the + operator is Exists or DoesNotExist, the values array must + be empty. This array is replaced during a strategic merge + patch. + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + description: matchLabels is a map of {key,value} pairs. A single + {key,value} in the matchLabels map is equivalent to an element + of matchExpressions, whose key field is "key", the operator is + "In", and the values array contains only "value". The requirements + are ANDed. + type: object + type: object + escalationPolicy: + description: ID of an existing Escalation Policy in PagerDuty. 
+ type: string + pagerdutyApiKeySecretRef: + description: Reference to the secret containing PAGERDUTY_API_KEY. + properties: + name: + description: Name is unique within a namespace to reference a secret + resource. + type: string + namespace: + description: Namespace defines the space within which the secret + name must be unique. + type: string + type: object + resolveTimeout: + description: Time in seconds that an incident is automatically resolved + if left open for that long. Value must not be negative. Omitting or + setting this field to 0 will disable the feature. + minimum: 0 + type: integer + servicePrefix: + description: Prefix to set on the PagerDuty Service name. + type: string + targetSecretRef: + description: Name and namespace in the target cluster where the secret + is synced. + properties: + name: + description: Name is unique within a namespace to reference a secret + resource. + type: string + namespace: + description: Namespace defines the space within which the secret + name must be unique. + type: string + type: object + required: + - clusterDeploymentSelector + - escalationPolicy + - pagerdutyApiKeySecretRef + - servicePrefix + - targetSecretRef + type: object + status: + description: PagerDutyIntegrationStatus defines the observed state of PagerDutyIntegration + type: object + version: v1alpha1 + versions: + - name: v1alpha1 + served: true + storage: true diff --git a/deploy/examples/pagerduty_v1alpha1_pagerdutyintegration_cr.yaml b/deploy/examples/pagerduty_v1alpha1_pagerdutyintegration_cr.yaml new file mode 100644 index 00000000..1a509eb9 --- /dev/null +++ b/deploy/examples/pagerduty_v1alpha1_pagerdutyintegration_cr.yaml @@ -0,0 +1,18 @@ +apiVersion: pagerduty.openshift.io/v1alpha1 +kind: PagerDutyIntegration +metadata: + name: example-pagerdutyintegration +spec: + acknowledgeTimeout: 21600 + resolveTimeout: 0 + escalationPolicy: PA12345X + servicePrefix: test + pagerdutyApiKeySecretRef: + name: pagerduty-api-key + namespace: pagerduty-operator + clusterDeploymentSelector: + matchLabels: + api.openshift.com/test: "true" + targetSecretRef: + name: test-pd-secret + namespace: test-monitoring diff --git a/hack/generate.sh b/hack/generate.sh new file mode 100755 index 00000000..853277ab --- /dev/null +++ b/hack/generate.sh @@ -0,0 +1,12 @@ +#!/bin/bash + +# Commands need to be run from project root +cd "$( dirname "${BASH_SOURCE[0]}" )"/.. 
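+
+# Requires the operator-sdk and yq CLIs on PATH.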
+ +operator-sdk generate k8s +operator-sdk generate crds + +# This can be removed once the operator no longer needs to be run on +# OpenShift v3.11 +yq d -i deploy/crds/pagerduty.openshift.io_pagerdutyintegrations_crd.yaml \ + spec.validation.openAPIV3Schema.type diff --git a/manifests/02-role.yaml b/manifests/02-role.yaml index fd682fc6..effe3245 100644 --- a/manifests/02-role.yaml +++ b/manifests/02-role.yaml @@ -3,6 +3,17 @@ apiVersion: rbac.authorization.k8s.io/v1 metadata: name: pagerduty-operator rules: +- apiGroups: + - pagerduty.openshift.io + resources: + - pagerdutyintegrations + - pagerdutyintegrations/status + - pagerdutyintegrations/finalizers + verbs: + - get + - list + - watch + - update - apiGroups: - "" resources: @@ -63,4 +74,4 @@ rules: resources: - routes verbs: - - '*' \ No newline at end of file + - '*' diff --git a/pkg/apis/addtoscheme_pagerduty_v1alpha1.go b/pkg/apis/addtoscheme_pagerduty_v1alpha1.go new file mode 100644 index 00000000..4ca2ca05 --- /dev/null +++ b/pkg/apis/addtoscheme_pagerduty_v1alpha1.go @@ -0,0 +1,10 @@ +package apis + +import ( + "github.com/openshift/pagerduty-operator/pkg/apis/pagerduty/v1alpha1" +) + +func init() { + // Register the types with the Scheme so the components can map objects to GroupVersionKinds and back + AddToSchemes = append(AddToSchemes, v1alpha1.SchemeBuilder.AddToScheme) +} diff --git a/pkg/apis/pagerduty/v1alpha1/doc.go b/pkg/apis/pagerduty/v1alpha1/doc.go new file mode 100644 index 00000000..d9d3caa4 --- /dev/null +++ b/pkg/apis/pagerduty/v1alpha1/doc.go @@ -0,0 +1,4 @@ +// Package v1alpha1 contains API Schema definitions for the pagerduty v1alpha1 API group +// +k8s:deepcopy-gen=package,register +// +groupName=pagerduty.openshift.io +package v1alpha1 diff --git a/pkg/apis/pagerduty/v1alpha1/pagerdutyintegration_types.go b/pkg/apis/pagerduty/v1alpha1/pagerdutyintegration_types.go new file mode 100644 index 00000000..b04ae264 --- /dev/null +++ b/pkg/apis/pagerduty/v1alpha1/pagerdutyintegration_types.go @@ -0,0 +1,69 @@ +package v1alpha1 + +import ( + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +// PagerDutyIntegrationSpec defines the desired state of PagerDutyIntegration +// +k8s:openapi-gen=true +type PagerDutyIntegrationSpec struct { + // Time in seconds that an incident changes to the Triggered State after + // being Acknowledged. Value must not be negative. Omitting or setting + // this field to 0 will disable the feature. + // +kubebuilder:validation:Minimum=0 + AcknowledgeTimeout uint `json:"acknowledgeTimeout,omitempty"` + + // ID of an existing Escalation Policy in PagerDuty. + EscalationPolicy string `json:"escalationPolicy"` + + // Time in seconds that an incident is automatically resolved if left + // open for that long. Value must not be negative. Omitting or setting + // this field to 0 will disable the feature. + // +kubebuilder:validation:Minimum=0 + ResolveTimeout uint `json:"resolveTimeout,omitempty"` + + // Prefix to set on the PagerDuty Service name. + ServicePrefix string `json:"servicePrefix"` + + // Reference to the secret containing PAGERDUTY_API_KEY. + PagerdutyApiKeySecretRef corev1.SecretReference `json:"pagerdutyApiKeySecretRef"` + + // A label selector used to find which clusterdeployment CRs receive a + // PD integration based on this configuration. + ClusterDeploymentSelector metav1.LabelSelector `json:"clusterDeploymentSelector"` + + // Name and namespace in the target cluster where the secret is synced. 
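+	// For OSD clusters this has typically been openshift-monitoring/pd-secret.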
+ TargetSecretRef corev1.SecretReference `json:"targetSecretRef"` +} + +// PagerDutyIntegrationStatus defines the observed state of PagerDutyIntegration +// +k8s:openapi-gen=true +type PagerDutyIntegrationStatus struct{} + +//go:generate ../../../../hack/generate.sh +// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object + +// PagerDutyIntegration is the Schema for the pagerdutyintegrations API +// +k8s:openapi-gen=true +// +kubebuilder:subresource:status +type PagerDutyIntegration struct { + metav1.TypeMeta `json:",inline"` + metav1.ObjectMeta `json:"metadata,omitempty"` + + Spec PagerDutyIntegrationSpec `json:"spec,omitempty"` + Status PagerDutyIntegrationStatus `json:"status,omitempty"` +} + +// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object + +// PagerDutyIntegrationList contains a list of PagerDutyIntegration +type PagerDutyIntegrationList struct { + metav1.TypeMeta `json:",inline"` + metav1.ListMeta `json:"metadata,omitempty"` + Items []PagerDutyIntegration `json:"items"` +} + +func init() { + SchemeBuilder.Register(&PagerDutyIntegration{}, &PagerDutyIntegrationList{}) +} diff --git a/pkg/apis/pagerduty/v1alpha1/register.go b/pkg/apis/pagerduty/v1alpha1/register.go new file mode 100644 index 00000000..db9a6e17 --- /dev/null +++ b/pkg/apis/pagerduty/v1alpha1/register.go @@ -0,0 +1,19 @@ +// NOTE: Boilerplate only. Ignore this file. + +// Package v1alpha1 contains API Schema definitions for the pagerduty v1alpha1 API group +// +k8s:deepcopy-gen=package,register +// +groupName=pagerduty.openshift.io +package v1alpha1 + +import ( + "k8s.io/apimachinery/pkg/runtime/schema" + "sigs.k8s.io/controller-runtime/pkg/runtime/scheme" +) + +var ( + // SchemeGroupVersion is group version used to register these objects + SchemeGroupVersion = schema.GroupVersion{Group: "pagerduty.openshift.io", Version: "v1alpha1"} + + // SchemeBuilder is used to add go types to the GroupVersionKind scheme + SchemeBuilder = &scheme.Builder{GroupVersion: SchemeGroupVersion} +) diff --git a/pkg/apis/pagerduty/v1alpha1/zz_generated.deepcopy.go b/pkg/apis/pagerduty/v1alpha1/zz_generated.deepcopy.go new file mode 100644 index 00000000..52c12351 --- /dev/null +++ b/pkg/apis/pagerduty/v1alpha1/zz_generated.deepcopy.go @@ -0,0 +1,105 @@ +// +build !ignore_autogenerated + +// Code generated by operator-sdk. DO NOT EDIT. + +package v1alpha1 + +import ( + runtime "k8s.io/apimachinery/pkg/runtime" +) + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PagerDutyIntegration) DeepCopyInto(out *PagerDutyIntegration) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) + in.Spec.DeepCopyInto(&out.Spec) + out.Status = in.Status + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PagerDutyIntegration. +func (in *PagerDutyIntegration) DeepCopy() *PagerDutyIntegration { + if in == nil { + return nil + } + out := new(PagerDutyIntegration) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *PagerDutyIntegration) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *PagerDutyIntegrationList) DeepCopyInto(out *PagerDutyIntegrationList) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ListMeta.DeepCopyInto(&out.ListMeta) + if in.Items != nil { + in, out := &in.Items, &out.Items + *out = make([]PagerDutyIntegration, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PagerDutyIntegrationList. +func (in *PagerDutyIntegrationList) DeepCopy() *PagerDutyIntegrationList { + if in == nil { + return nil + } + out := new(PagerDutyIntegrationList) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *PagerDutyIntegrationList) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PagerDutyIntegrationSpec) DeepCopyInto(out *PagerDutyIntegrationSpec) { + *out = *in + out.PagerdutyApiKeySecretRef = in.PagerdutyApiKeySecretRef + in.ClusterDeploymentSelector.DeepCopyInto(&out.ClusterDeploymentSelector) + out.TargetSecretRef = in.TargetSecretRef + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PagerDutyIntegrationSpec. +func (in *PagerDutyIntegrationSpec) DeepCopy() *PagerDutyIntegrationSpec { + if in == nil { + return nil + } + out := new(PagerDutyIntegrationSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PagerDutyIntegrationStatus) DeepCopyInto(out *PagerDutyIntegrationStatus) { + *out = *in + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PagerDutyIntegrationStatus. +func (in *PagerDutyIntegrationStatus) DeepCopy() *PagerDutyIntegrationStatus { + if in == nil { + return nil + } + out := new(PagerDutyIntegrationStatus) + in.DeepCopyInto(out) + return out +} diff --git a/pkg/controller/add_clusterdeployment.go b/pkg/controller/add_clusterdeployment.go deleted file mode 100644 index 7c5f10a4..00000000 --- a/pkg/controller/add_clusterdeployment.go +++ /dev/null @@ -1,24 +0,0 @@ -// Copyright 2019 RedHat -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -package controller - -import ( - "github.com/openshift/pagerduty-operator/pkg/controller/clusterdeployment" -) - -func init() { - // AddToManagerFuncs is a list of functions to create controllers and add them to a manager. 
- AddToManagerFuncs = append(AddToManagerFuncs, clusterdeployment.Add) -} diff --git a/pkg/controller/add_pagerdutyintegration.go b/pkg/controller/add_pagerdutyintegration.go new file mode 100644 index 00000000..d9b2d508 --- /dev/null +++ b/pkg/controller/add_pagerdutyintegration.go @@ -0,0 +1,10 @@ +package controller + +import ( + "github.com/openshift/pagerduty-operator/pkg/controller/pagerdutyintegration" +) + +func init() { + // AddToManagerFuncs is a list of functions to create controllers and add them to a manager. + AddToManagerFuncs = append(AddToManagerFuncs, pagerdutyintegration.Add) +} diff --git a/pkg/controller/add_syncset.go b/pkg/controller/add_syncset.go deleted file mode 100644 index 0401056f..00000000 --- a/pkg/controller/add_syncset.go +++ /dev/null @@ -1,24 +0,0 @@ -// Copyright 2019 RedHat -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -package controller - -import ( - "github.com/openshift/pagerduty-operator/pkg/controller/syncset" -) - -func init() { - // AddToManagerFuncs is a list of functions to create controllers and add them to a manager. - AddToManagerFuncs = append(AddToManagerFuncs, syncset.Add) -} diff --git a/pkg/controller/clusterdeployment/clusterdeployment_controller.go b/pkg/controller/clusterdeployment/clusterdeployment_controller.go deleted file mode 100644 index 165042da..00000000 --- a/pkg/controller/clusterdeployment/clusterdeployment_controller.go +++ /dev/null @@ -1,145 +0,0 @@ -// Copyright 2019 RedHat -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -package clusterdeployment - -import ( - "context" - "fmt" - - "github.com/go-logr/logr" - hivev1 "github.com/openshift/hive/pkg/apis/hive/v1" - "github.com/openshift/pagerduty-operator/config" - "k8s.io/apimachinery/pkg/types" - - pd "github.com/openshift/pagerduty-operator/pkg/pagerduty" - "github.com/openshift/pagerduty-operator/pkg/utils" - corev1 "k8s.io/api/core/v1" - "k8s.io/apimachinery/pkg/api/errors" - "k8s.io/apimachinery/pkg/runtime" - "sigs.k8s.io/controller-runtime/pkg/client" - "sigs.k8s.io/controller-runtime/pkg/controller" - "sigs.k8s.io/controller-runtime/pkg/handler" - "sigs.k8s.io/controller-runtime/pkg/manager" - "sigs.k8s.io/controller-runtime/pkg/reconcile" - logf "sigs.k8s.io/controller-runtime/pkg/runtime/log" - "sigs.k8s.io/controller-runtime/pkg/source" -) - -var log = logf.Log.WithName("pagerduty_cd") - -// Add creates a new ClusterDeployment Controller and adds it to the Manager. 
The Manager will set fields on the Controller -// and Start it when the Manager is Started. -func Add(mgr manager.Manager) error { - newRec, err := newReconciler(mgr) - if err != nil { - return err - } - - return add(mgr, newRec) -} - -// newReconciler returns a new reconcile.Reconciler -func newReconciler(mgr manager.Manager) (reconcile.Reconciler, error) { - tempClient, err := client.New(mgr.GetConfig(), client.Options{Scheme: mgr.GetScheme()}) - if err != nil { - return nil, err - } - - // get PD API key from secret - pdAPIKey, err := utils.LoadSecretData(tempClient, config.PagerDutyAPISecretName, config.OperatorNamespace, config.PagerDutyAPISecretKey) - - return &ReconcileClusterDeployment{ - client: mgr.GetClient(), - scheme: mgr.GetScheme(), - pdclient: pd.NewClient(pdAPIKey), - }, nil -} - -// add adds a new Controller to mgr with r as the reconcile.Reconciler -func add(mgr manager.Manager, r reconcile.Reconciler) error { - // Create a new controller - c, err := controller.New("clusterdeployment-controller", mgr, controller.Options{Reconciler: r}) - if err != nil { - return err - } - - // Watch for changes to primary resource ClusterDeployment - err = c.Watch(&source.Kind{Type: &hivev1.ClusterDeployment{}}, &handler.EnqueueRequestForObject{}) - if err != nil { - return err - } - - err = c.Watch(&source.Kind{Type: &corev1.Secret{}}, &handler.EnqueueRequestForOwner{ - IsController: true, - OwnerType: &hivev1.ClusterDeployment{}, - }) - - return nil -} - -var _ reconcile.Reconciler = &ReconcileClusterDeployment{} - -// ReconcileClusterDeployment reconciles a ClusterDeployment object -type ReconcileClusterDeployment struct { - // This client, initialized using mgr.Client() above, is a split client - // that reads objects from the cache and writes to the apiserver - client client.Client - scheme *runtime.Scheme - reqLogger logr.Logger - pdclient pd.Client -} - -// Reconcile reads that state of the cluster for a ClusterDeployment object and makes changes based on the state read -// and what is in the ClusterDeployment.Spec -// TODO(user): Modify this Reconcile function to implement your Controller logic. This example creates -// a Pod as an example -// Note: -// The Controller will requeue the Request to be processed again if the returned error is non-nil or -// Result.Requeue is true, otherwise upon completion it will remove the work from the queue. 
-func (r *ReconcileClusterDeployment) Reconcile(request reconcile.Request) (reconcile.Result, error) { - r.reqLogger = log.WithValues("Request.Namespace", request.Namespace, "Request.Name", request.Name) - r.reqLogger.Info("Reconciling ClusterDeployment") - - processCD, instance, err := utils.CheckClusterDeployment(request, r.client, r.reqLogger) - - if err != nil { - // something went wrong, requeue - return reconcile.Result{}, err - } - - if !processCD || instance.DeletionTimestamp != nil { - return r.handleDelete(request, instance) - } - - ssName := fmt.Sprintf("%v%v", instance.Name, config.SyncSetPostfix) - ss := &hivev1.SyncSet{} - err = r.client.Get(context.TODO(), types.NamespacedName{Name: ssName, Namespace: request.Namespace}, ss) - - if err != nil { - if errors.IsNotFound(err) { - return r.handleCreate(request, instance) - } - } - - sc := &corev1.Secret{} - err = r.client.Get(context.TODO(), types.NamespacedName{Name: config.PagerDutySecretName, Namespace: request.Namespace}, sc) - - if err != nil { - if errors.IsNotFound(err) { - return r.handleCreate(request, instance) - } - } - return reconcile.Result{}, nil -} diff --git a/pkg/controller/clusterdeployment/clusterdeployment_deleted.go b/pkg/controller/clusterdeployment/clusterdeployment_deleted.go deleted file mode 100644 index a97f9f08..00000000 --- a/pkg/controller/clusterdeployment/clusterdeployment_deleted.go +++ /dev/null @@ -1,127 +0,0 @@ -// Copyright 2019 RedHat -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -package clusterdeployment - -import ( - "context" - - hivev1 "github.com/openshift/hive/pkg/apis/hive/v1" - "github.com/openshift/pagerduty-operator/config" - metrics "github.com/openshift/pagerduty-operator/pkg/localmetrics" - pd "github.com/openshift/pagerduty-operator/pkg/pagerduty" - "github.com/openshift/pagerduty-operator/pkg/utils" - "k8s.io/apimachinery/pkg/api/errors" - "sigs.k8s.io/controller-runtime/pkg/reconcile" -) - -func (r *ReconcileClusterDeployment) handleDelete(request reconcile.Request, instance *hivev1.ClusterDeployment) (reconcile.Result, error) { - if instance == nil { - // nothing to do, bail early - return reconcile.Result{}, nil - } - - if !utils.HasFinalizer(instance, config.OperatorFinalizer) { - return reconcile.Result{}, nil - } - - ClusterID := instance.Spec.ClusterName - - pdData := &pd.Data{ - ClusterID: instance.Spec.ClusterName, - BaseDomain: instance.Spec.BaseDomain, - } - err := pdData.ParsePDConfig(r.client) - deletePDService := true - - if err != nil { - if !errors.IsNotFound(err) { - // some error other than not found, requeue - return reconcile.Result{}, err - } - /* - The PD config was not found. - - If the error is a missing PD Config we must not fail or requeue. - If we are deleting (we're in handleDelete) and we cannot find the PD config - it will never be created. We cannot recover so just skip the PD service - deletion. 
- */ - deletePDService = false - } - - if deletePDService { - err = pdData.ParseClusterConfig(r.client, request.Namespace, request.Name) - - if err != nil { - if !errors.IsNotFound(err) { - // some error other than not found, requeue - return reconcile.Result{}, err - } - /* - Something was not found if we are here. - - The missing object will never be created as we're in the handleDelete function. - Skip service deletion in this case and continue with deletion. - */ - deletePDService = false - } - } - - if deletePDService { - // we have everything necessary to attempt deletion of the PD service - err = r.pdclient.DeleteService(pdData) - if err != nil { - r.reqLogger.Error(err, "Failed cleaning up pagerduty.") - } else { - // NOTE not deleting the configmap if we didn't delete the service with the assumption that the config can be used later for cleanup - // find the PD configmap and delete it - cmName := request.Name + config.ConfigMapPostfix - r.reqLogger.Info("Deleting PD ConfigMap", "Namespace", request.Namespace, "Name", cmName) - err = utils.DeleteConfigMap(cmName, request.Namespace, r.client, r.reqLogger) - - if err != nil { - r.reqLogger.Error(err, "Error deleting ConfigMap", "Namespace", request.Namespace, "Name", cmName) - } - } - } - // find the pd secret and delete id - r.reqLogger.Info("Deleting PD secret", "Namespace", request.Namespace, "Name", config.PagerDutySecretName) - err = utils.DeleteSecret(config.PagerDutySecretName, request.Namespace, r.client, r.reqLogger) - if err != nil { - r.reqLogger.Error(err, "Error deleting Secret", "Namespace", request.Namespace, "Name", config.PagerDutySecretName) - } - - // find the PD syncset and delete it - ssName := request.Name + config.SyncSetPostfix - r.reqLogger.Info("Deleting PD SyncSet", "Namespace", request.Namespace, "Name", ssName) - err = utils.DeleteSyncSet(ssName, request.Namespace, r.client, r.reqLogger) - - if err != nil { - r.reqLogger.Error(err, "Error deleting SyncSet", "Namespace", request.Namespace, "Name", ssName) - } - - if utils.HasFinalizer(instance, config.OperatorFinalizer) { - r.reqLogger.Info("Deleting PD finalizer from ClusterDeployment", "Namespace", request.Namespace, "Name", request.Name) - utils.DeleteFinalizer(instance, config.OperatorFinalizer) - err = r.client.Update(context.TODO(), instance) - if err != nil { - metrics.UpdateMetricPagerDutyDeleteFailure(1, ClusterID) - return reconcile.Result{}, err - } - } - metrics.UpdateMetricPagerDutyDeleteFailure(0, ClusterID) - - return reconcile.Result{}, nil -} diff --git a/pkg/controller/clusterdeployment/clusterdeployment_created.go b/pkg/controller/pagerdutyintegration/clusterdeployment_created.go similarity index 51% rename from pkg/controller/clusterdeployment/clusterdeployment_created.go rename to pkg/controller/pagerdutyintegration/clusterdeployment_created.go index dcf527f7..341612b4 100644 --- a/pkg/controller/clusterdeployment/clusterdeployment_created.go +++ b/pkg/controller/pagerdutyintegration/clusterdeployment_created.go @@ -12,7 +12,7 @@ // See the License for the specific language governing permissions and // limitations under the License. 
-package clusterdeployment +package pagerdutyintegration import ( "context" @@ -23,121 +23,158 @@ import ( hivev1 "github.com/openshift/hive/pkg/apis/hive/v1" "github.com/openshift/pagerduty-operator/config" + pagerdutyv1alpha1 "github.com/openshift/pagerduty-operator/pkg/apis/pagerduty/v1alpha1" "github.com/openshift/pagerduty-operator/pkg/kube" "github.com/openshift/pagerduty-operator/pkg/localmetrics" pd "github.com/openshift/pagerduty-operator/pkg/pagerduty" "github.com/openshift/pagerduty-operator/pkg/utils" "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil" - "sigs.k8s.io/controller-runtime/pkg/reconcile" ) -func (r *ReconcileClusterDeployment) handleCreate(request reconcile.Request, instance *hivev1.ClusterDeployment) (reconcile.Result, error) { - if !instance.Spec.Installed { +func (r *ReconcilePagerDutyIntegration) handleCreate(pdi *pagerdutyv1alpha1.PagerDutyIntegration, cd *hivev1.ClusterDeployment) error { + var ( + // secretName is the name of the Secret deployed to the target + // cluster, and also the name of the SyncSet that causes it to + // be deployed. + secretName string = config.Name(pdi.Spec.ServicePrefix, cd.Name, config.SecretSuffix) + + // configMapName is the name of the ConfigMap containing the + // SERVICE_ID and INTEGRATION_ID + configMapName string = config.Name(pdi.Spec.ServicePrefix, cd.Name, config.ConfigMapSuffix) + + // There can be more than one PagerDutyIntegration that causes + // creation of resources for a ClusterDeployment, and each one + // will need a finalizer here. We add a suffix of the CR + // name to distinguish them. + finalizer string = "pd.managed.openshift.io/" + pdi.Name + ) + + if !cd.Spec.Installed { // Cluster isn't installed yet, return - return reconcile.Result{}, nil + return nil } - if utils.HasFinalizer(instance, config.OperatorFinalizer) == false { - utils.AddFinalizer(instance, config.OperatorFinalizer) - err := r.client.Update(context.TODO(), instance) - if err != nil { - return reconcile.Result{}, err - } + if utils.HasFinalizer(cd, finalizer) == false { + utils.AddFinalizer(cd, finalizer) + return r.client.Update(context.TODO(), cd) + } + + ClusterID := cd.Spec.ClusterName + + pdAPISecret := &corev1.Secret{} + err := r.client.Get( + context.TODO(), + types.NamespacedName{ + Name: pdi.Spec.PagerdutyApiKeySecretRef.Name, + Namespace: pdi.Spec.PagerdutyApiKeySecretRef.Namespace, + }, + pdAPISecret, + ) + if err != nil { + return err } - ClusterID := instance.Spec.ClusterName + apiKey, err := pd.GetSecretKey(pdAPISecret.Data, config.PagerDutyAPISecretKey) + if err != nil { + return err + } pdData := &pd.Data{ - ClusterID: instance.Spec.ClusterName, - BaseDomain: instance.Spec.BaseDomain, + ClusterID: cd.Spec.ClusterName, + BaseDomain: cd.Spec.BaseDomain, + EscalationPolicyID: pdi.Spec.EscalationPolicy, + AutoResolveTimeout: pdi.Spec.ResolveTimeout, + AcknowledgeTimeOut: pdi.Spec.AcknowledgeTimeout, + ServicePrefix: pdi.Spec.ServicePrefix, + APIKey: apiKey, } - pdData.ParsePDConfig(r.client) + // To prevent scoping issues in the err check below. 
 	var pdIntegrationKey string
 
-	err := pdData.ParseClusterConfig(r.client, request.Namespace, request.Name)
+	err = pdData.ParseClusterConfig(r.client, cd.Namespace, configMapName)
 
 	if err != nil {
 		var createErr error
 		_, createErr = r.pdclient.CreateService(pdData)
 		if createErr != nil {
 			localmetrics.UpdateMetricPagerDutyCreateFailure(1, ClusterID)
-			return reconcile.Result{}, createErr
+			return createErr
 		}
 	}
 	localmetrics.UpdateMetricPagerDutyCreateFailure(0, ClusterID)
 
 	pdIntegrationKey, err = r.pdclient.GetIntegrationKey(pdData)
 	if err != nil {
-		return reconcile.Result{}, err
+		return err
 	}
 
 	//add secret part
-	secret := kube.GeneratePdSecret(instance.Namespace, config.PagerDutySecretName, pdIntegrationKey)
+	secret := kube.GeneratePdSecret(cd.Namespace, secretName, pdIntegrationKey)
 
 	r.reqLogger.Info("creating pd secret")
 	//add reference
-	if err = controllerutil.SetControllerReference(instance, secret, r.scheme); err != nil {
+	if err = controllerutil.SetControllerReference(cd, secret, r.scheme); err != nil {
 		r.reqLogger.Error(err, "Error setting controller reference on secret")
-		return reconcile.Result{}, err
+		return err
 	}
 
 	if err = r.client.Create(context.TODO(), secret); err != nil {
 		if !errors.IsAlreadyExists(err) {
-			return reconcile.Result{}, err
+			return err
 		}
 
-		r.reqLogger.Info("the pd secret exist, check if pdIntegrationKey is changed or not")
+		r.reqLogger.Info("the pd secret exists, check if pdIntegrationKey is changed or not")
 		sc := &corev1.Secret{}
-		err = r.client.Get(context.TODO(), types.NamespacedName{Name: secret.Name, Namespace: request.Namespace}, sc)
+		err = r.client.Get(context.TODO(), types.NamespacedName{Name: secret.Name, Namespace: cd.Namespace}, sc)
 		if err != nil {
-			return reconcile.Result{}, nil
+			return nil
 		}
 
 		if string(sc.Data["PAGERDUTY_KEY"]) != pdIntegrationKey {
 			r.reqLogger.Info("pdIntegrationKey is changed, delete the secret first")
 			if err = r.client.Delete(context.TODO(), secret); err != nil {
 				log.Info("failed to delete existing pd secret")
-				return reconcile.Result{}, err
+				return err
 			}
 			r.reqLogger.Info("creating pd secret")
 			if err = r.client.Create(context.TODO(), secret); err != nil {
-				return reconcile.Result{}, err
+				return err
 			}
 		}
 	}
 
 	r.reqLogger.Info("Creating syncset")
 	ss := &hivev1.SyncSet{}
-	err = r.client.Get(context.TODO(), types.NamespacedName{Name: request.Name + config.SyncSetPostfix, Namespace: instance.Namespace}, ss)
+	err = r.client.Get(context.TODO(), types.NamespacedName{Name: secretName, Namespace: cd.Namespace}, ss)
 	if err != nil {
 		r.reqLogger.Info("error finding the old syncset")
 		if !errors.IsNotFound(err) {
-			return reconcile.Result{}, err
+			return err
 		}
 		r.reqLogger.Info("syncset not found , create a new one on this ")
-		ss = kube.GenerateSyncSet(request.Namespace, request.Name, secret)
-		if err = controllerutil.SetControllerReference(instance, ss, r.scheme); err != nil {
+		ss = kube.GenerateSyncSet(cd.Namespace, cd.Name, secret)
+		if err = controllerutil.SetControllerReference(cd, ss, r.scheme); err != nil {
 			r.reqLogger.Error(err, "Error setting controller reference on syncset")
-			return reconcile.Result{}, err
+			return err
 		}
 		if err := r.client.Create(context.TODO(), ss); err != nil {
-			return reconcile.Result{}, err
+			return err
 		}
 	}
 
 	r.reqLogger.Info("Creating configmap")
-	newCM := kube.GenerateConfigMap(request.Namespace, request.Name, pdData.ServiceID, pdData.IntegrationID)
-	if err = controllerutil.SetControllerReference(instance, newCM, r.scheme); err != nil {
+	newCM := kube.GenerateConfigMap(cd.Namespace, configMapName,
pdData.ServiceID, pdData.IntegrationID) + if err = controllerutil.SetControllerReference(cd, newCM, r.scheme); err != nil { r.reqLogger.Error(err, "Error setting controller reference on configmap") - return reconcile.Result{}, err + return err } if err := r.client.Create(context.TODO(), newCM); err != nil { if errors.IsAlreadyExists(err) { if updateErr := r.client.Update(context.TODO(), newCM); updateErr != nil { - return reconcile.Result{}, err + return err } - return reconcile.Result{}, nil + return nil } - return reconcile.Result{}, err + return err } - return reconcile.Result{}, nil + return nil } diff --git a/pkg/controller/pagerdutyintegration/clusterdeployment_deleted.go b/pkg/controller/pagerdutyintegration/clusterdeployment_deleted.go new file mode 100644 index 00000000..f846748c --- /dev/null +++ b/pkg/controller/pagerdutyintegration/clusterdeployment_deleted.go @@ -0,0 +1,165 @@ +// Copyright 2019 RedHat +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package pagerdutyintegration + +import ( + "context" + + hivev1 "github.com/openshift/hive/pkg/apis/hive/v1" + "github.com/openshift/pagerduty-operator/config" + pagerdutyv1alpha1 "github.com/openshift/pagerduty-operator/pkg/apis/pagerduty/v1alpha1" + metrics "github.com/openshift/pagerduty-operator/pkg/localmetrics" + pd "github.com/openshift/pagerduty-operator/pkg/pagerduty" + "github.com/openshift/pagerduty-operator/pkg/utils" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/types" +) + +func (r *ReconcilePagerDutyIntegration) handleDelete(pdi *pagerdutyv1alpha1.PagerDutyIntegration, cd *hivev1.ClusterDeployment) error { + var ( + // secretName is the name of the Secret deployed to the target + // cluster, and also the name of the SyncSet that causes it to + // be deployed. + secretName string = config.Name(pdi.Spec.ServicePrefix, cd.Name, config.SecretSuffix) + + // configMapName is the name of the ConfigMap containing the + // SERVICE_ID and INTEGRATION_ID + configMapName string = config.Name(pdi.Spec.ServicePrefix, cd.Name, config.ConfigMapSuffix) + + // There can be more than one PagerDutyIntegration that causes + // creation of resources for a ClusterDeployment, and each one + // will need a finalizer here. We add a suffix of the CR + // name to distinguish them. + finalizer string = "pd.managed.openshift.io/" + pdi.Name + ) + + if cd == nil { + // nothing to do, bail early + return nil + } + + if !utils.HasFinalizer(cd, finalizer) { + return nil + } + + ClusterID := cd.Spec.ClusterName + + deletePDService := true + + pdAPISecret := &corev1.Secret{} + err := r.client.Get( + context.TODO(), + types.NamespacedName{ + Name: pdi.Spec.PagerdutyApiKeySecretRef.Name, + Namespace: pdi.Spec.PagerdutyApiKeySecretRef.Namespace, + }, + pdAPISecret, + ) + if err != nil { + if !errors.IsNotFound(err) { + // some error other than not found, requeue + return err + } + /* + The PD config was not found. + + If the error is a missing PD Config we must not fail or requeue. 
+			If we are deleting (we're in handleDelete) and we cannot find the PD config
+			it will never be created. We cannot recover so just skip the PD service
+			deletion.
+		*/
+		deletePDService = false
+	}
+
+	apiKey := ""
+	if deletePDService {
+		// Only load the API key when the PD config secret was found;
+		// if it is missing we have already decided to skip the PD
+		// service deletion and must not fail or requeue.
+		apiKey, err = pd.GetSecretKey(pdAPISecret.Data, config.PagerDutyAPISecretKey)
+		if err != nil {
+			return err
+		}
+	}
+
+	pdData := &pd.Data{
+		ClusterID:          cd.Spec.ClusterName,
+		BaseDomain:         cd.Spec.BaseDomain,
+		EscalationPolicyID: pdi.Spec.EscalationPolicy,
+		AutoResolveTimeout: pdi.Spec.ResolveTimeout,
+		AcknowledgeTimeOut: pdi.Spec.AcknowledgeTimeout,
+		ServicePrefix:      pdi.Spec.ServicePrefix,
+		APIKey:             apiKey,
+	}
+
+	if deletePDService {
+		err = pdData.ParseClusterConfig(r.client, cd.Namespace, configMapName)
+
+		if err != nil {
+			if !errors.IsNotFound(err) {
+				// some error other than not found, requeue
+				return err
+			}
+			/*
+				Something was not found if we are here.
+
+				The missing object will never be created as we're in the handleDelete function.
+				Skip service deletion in this case and continue with deletion.
+			*/
+			deletePDService = false
+		}
+	}
+
+	if deletePDService {
+		// we have everything necessary to attempt deletion of the PD service
+		err = r.pdclient.DeleteService(pdData)
+		if err != nil {
+			r.reqLogger.Error(err, "Failed cleaning up pagerduty.")
+		} else {
+			// NOTE: not deleting the configmap if we didn't delete
+			// the service, with the assumption that the config can
+			// be used later for cleanup. Find the PD configmap and
+			// delete it.
+			r.reqLogger.Info("Deleting PD ConfigMap", "Namespace", cd.Namespace, "Name", configMapName)
+			err = utils.DeleteConfigMap(configMapName, cd.Namespace, r.client, r.reqLogger)
+
+			if err != nil {
+				r.reqLogger.Error(err, "Error deleting ConfigMap", "Namespace", cd.Namespace, "Name", configMapName)
+			}
+		}
+	}
+	// find the pd secret and delete it
+	r.reqLogger.Info("Deleting PD secret", "Namespace", cd.Namespace, "Name", secretName)
+	err = utils.DeleteSecret(secretName, cd.Namespace, r.client, r.reqLogger)
+	if err != nil {
+		r.reqLogger.Error(err, "Error deleting Secret", "Namespace", cd.Namespace, "Name", secretName)
+	}
+
+	// find the PD syncset and delete it
+	r.reqLogger.Info("Deleting PD SyncSet", "Namespace", cd.Namespace, "Name", secretName)
+	err = utils.DeleteSyncSet(secretName, cd.Namespace, r.client, r.reqLogger)
+
+	if err != nil {
+		r.reqLogger.Error(err, "Error deleting SyncSet", "Namespace", cd.Namespace, "Name", secretName)
+	}
+
+	if utils.HasFinalizer(cd, finalizer) {
+		r.reqLogger.Info("Deleting PD finalizer from ClusterDeployment", "Namespace", cd.Namespace, "Name", cd.Name)
+		utils.DeleteFinalizer(cd, finalizer)
+		err = r.client.Update(context.TODO(), cd)
+		if err != nil {
+			metrics.UpdateMetricPagerDutyDeleteFailure(1, ClusterID)
+			return err
+		}
+	}
+	metrics.UpdateMetricPagerDutyDeleteFailure(0, ClusterID)
+
+	return nil
+}
diff --git a/pkg/controller/pagerdutyintegration/clusterdeployment_migrate.go b/pkg/controller/pagerdutyintegration/clusterdeployment_migrate.go
new file mode 100644
index 00000000..1b19da0c
--- /dev/null
+++ b/pkg/controller/pagerdutyintegration/clusterdeployment_migrate.go
@@ -0,0 +1,160 @@
+// Copyright 2020 Red Hat
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. +package pagerdutyintegration + +import ( + "context" + "fmt" + + hivev1 "github.com/openshift/hive/pkg/apis/hive/v1" + "github.com/openshift/pagerduty-operator/config" + pagerdutyv1alpha1 "github.com/openshift/pagerduty-operator/pkg/apis/pagerduty/v1alpha1" + "github.com/openshift/pagerduty-operator/pkg/kube" + "github.com/openshift/pagerduty-operator/pkg/utils" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/types" + "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil" +) + +func (r *ReconcilePagerDutyIntegration) handleMigrate(pdi *pagerdutyv1alpha1.PagerDutyIntegration, cd *hivev1.ClusterDeployment) error { + var ( + // secretName is the name of the Secret deployed to the target + // cluster, and also the name of the SyncSet that causes it to + // be deployed. + secretName string = config.Name(pdi.Spec.ServicePrefix, cd.Name, config.SecretSuffix) + + // configMapName is the name of the ConfigMap containing the + // SERVICE_ID and INTEGRATION_ID + configMapName string = config.Name(pdi.Spec.ServicePrefix, cd.Name, config.ConfigMapSuffix) + + // There can be more than one PagerDutyIntegration that causes + // creation of resources for a ClusterDeployment, and each one + // will need a finalizer here. We add a suffix of the CR + // name to distinguish them. 
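+		// For example, a PagerDutyIntegration named
+		// "example-pagerdutyintegration" yields the finalizer
+		// "pd.managed.openshift.io/example-pagerdutyintegration".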
+ finalizer string = "pd.managed.openshift.io/" + pdi.Name + ) + + if !cd.Spec.Installed { + return nil + } + + // Step 1: Get old ConfigMap, Secret, SyncSet (or ignore migration if can't Get) + + oldCM := &corev1.ConfigMap{} + err := r.client.Get(context.TODO(), types.NamespacedName{Name: cd.Name + config.ConfigMapSuffix, Namespace: cd.Namespace}, oldCM) + if err != nil { + r.reqLogger.Error( + err, "Couldn't get legacy ConfigMap, assuming no migration to be done", + "ClusterDeployment.Name", cd.Name, "ClusterDeployment.Namespace", cd.Namespace, + "PagerDutyIntegration.Name", pdi.Name, "PagerDutyIntegration.Namespace", pdi.Namespace, + ) + return nil + } + + oldSecret := &corev1.Secret{} + err = r.client.Get(context.TODO(), types.NamespacedName{Name: "pd-secret", Namespace: cd.Namespace}, oldSecret) + if err != nil { + r.reqLogger.Error( + err, "Couldn't get legacy Secret, assuming no migration to be done", + "ClusterDeployment.Name", cd.Name, "ClusterDeployment.Namespace", cd.Namespace, + "PagerDutyIntegration.Name", pdi.Name, "PagerDutyIntegration.Namespace", pdi.Namespace, + ) + return nil + } + + oldSyncSet := &hivev1.SyncSet{} + err = r.client.Get(context.TODO(), types.NamespacedName{Name: cd.Name + "-pd-sync", Namespace: cd.Namespace}, oldSyncSet) + if err != nil { + r.reqLogger.Error( + err, "Couldn't get legacy SyncSet, assuming no migration to be done", + "ClusterDeployment.Name", cd.Name, "ClusterDeployment.Namespace", cd.Namespace, + "PagerDutyIntegration.Name", pdi.Name, "PagerDutyIntegration.Namespace", pdi.Namespace, + ) + return nil + } + + // Step 2: Duplicate ConfigMap, Secret, SyncSet into new names + + newCM := kube.GenerateConfigMap(cd.Namespace, configMapName, oldCM.Data["SERVICE_ID"], oldCM.Data["INTEGRATION_ID"]) + if err = controllerutil.SetControllerReference(cd, newCM, r.scheme); err != nil { + return fmt.Errorf("Couldn't set controller reference on ConfigMap: %w", err) + } + if err := r.client.Create(context.TODO(), newCM); err != nil && !errors.IsAlreadyExists(err) { + return fmt.Errorf("Couldn't create new ConfigMap: %w", err) + } + + newSecret := &corev1.Secret{ + Type: "Opaque", + ObjectMeta: metav1.ObjectMeta{ + Name: secretName, + Namespace: cd.Namespace, + }, + Data: map[string][]byte{ + "PAGERDUTY_KEY": oldSecret.Data["PAGERDUTY_KEY"], + }, + } + if err = controllerutil.SetControllerReference(cd, newSecret, r.scheme); err != nil { + return fmt.Errorf("Couldn't set controller reference on Secret: %w", err) + } + if err = r.client.Create(context.TODO(), newSecret); err != nil && !errors.IsAlreadyExists(err) { + return fmt.Errorf("Couldn't create new Secret: %w", err) + } + + newSyncSet := kube.GenerateSyncSet(cd.Namespace, cd.Name, newSecret) + if err = controllerutil.SetControllerReference(cd, newSyncSet, r.scheme); err != nil { + return fmt.Errorf("Couldn't set controller reference on SyncSet: %w", err) + } + if err = r.client.Create(context.TODO(), newSyncSet); err != nil { + return fmt.Errorf("Couldn't create new SyncSet: %w", err) + } + + // Step 3: Delete old ConfigMap, Secret, SyncSet + + err = r.client.Delete(context.TODO(), oldCM) + if err != nil { + return fmt.Errorf("Couldn't delete legacy ConfigMap: %w", err) + } + + err = r.client.Delete(context.TODO(), oldSyncSet) + if err != nil { + return fmt.Errorf("Couldn't delete legacy SyncSet: %w", err) + } + + err = r.client.Delete(context.TODO(), oldSecret) + if err != nil { + return fmt.Errorf("Couldn't delete legacy Secret: %w", err) + } + + // Step 4: Update finalizers on ClusterDeployment + + if 
utils.HasFinalizer(cd, finalizer) == false { + utils.AddFinalizer(cd, finalizer) + err := r.client.Update(context.TODO(), cd) + if err != nil { + return err + } + } + + if utils.HasFinalizer(cd, config.OperatorFinalizer) { + utils.DeleteFinalizer(cd, config.OperatorFinalizer) + err := r.client.Update(context.TODO(), cd) + if err != nil { + return err + } + } + + return nil +} diff --git a/pkg/controller/pagerdutyintegration/mappers.go b/pkg/controller/pagerdutyintegration/mappers.go new file mode 100644 index 00000000..b86d955f --- /dev/null +++ b/pkg/controller/pagerdutyintegration/mappers.go @@ -0,0 +1,105 @@ +// Copyright 2020 Red Hat +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package pagerdutyintegration + +import ( + "context" + "strings" + + hivev1 "github.com/openshift/hive/pkg/apis/hive/v1" + pagerdutyv1alpha1 "github.com/openshift/pagerduty-operator/pkg/apis/pagerduty/v1alpha1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/labels" + "k8s.io/apimachinery/pkg/types" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/handler" + "sigs.k8s.io/controller-runtime/pkg/reconcile" +) + +type clusterDeploymentToPagerDutyIntegrationsMapper struct { + Client client.Client +} + +func (m clusterDeploymentToPagerDutyIntegrationsMapper) Map(mo handler.MapObject) []reconcile.Request { + pdiList := &pagerdutyv1alpha1.PagerDutyIntegrationList{} + err := m.Client.List(context.TODO(), pdiList, &client.ListOptions{}) + if err != nil { + return []reconcile.Request{} + } + + requests := []reconcile.Request{} + for _, pdi := range pdiList.Items { + selector, err := metav1.LabelSelectorAsSelector(&pdi.Spec.ClusterDeploymentSelector) + if err != nil { + continue + } + if selector.Matches(labels.Set(mo.Meta.GetLabels())) { + requests = append(requests, reconcile.Request{ + NamespacedName: types.NamespacedName{ + Name: pdi.Name, + Namespace: pdi.Namespace, + }}, + ) + } + } + return requests +} + +type ownedByClusterDeploymentToPagerDutyIntegrationsMapper struct { + Client client.Client +} + +func (m ownedByClusterDeploymentToPagerDutyIntegrationsMapper) Map(mo handler.MapObject) []reconcile.Request { + relevantClusterDeployments := []*hivev1.ClusterDeployment{} + for _, or := range mo.Meta.GetOwnerReferences() { + if or.APIVersion == hivev1.SchemeGroupVersion.String() && strings.ToLower(or.Kind) == "clusterdeployment" { + cd := &hivev1.ClusterDeployment{} + err := m.Client.Get(context.TODO(), client.ObjectKey{Name: or.Name, Namespace: mo.Meta.GetNamespace()}, cd) + if err != nil { + continue + } + relevantClusterDeployments = append(relevantClusterDeployments, cd) + } + } + if len(relevantClusterDeployments) == 0 { + return []reconcile.Request{} + } + + pdiList := &pagerdutyv1alpha1.PagerDutyIntegrationList{} + err := m.Client.List(context.TODO(), pdiList, &client.ListOptions{}) + if err != nil { + return []reconcile.Request{} + } + + requests := []reconcile.Request{} + for _, pdi := range pdiList.Items { + selector, err := 
metav1.LabelSelectorAsSelector(&pdi.Spec.ClusterDeploymentSelector) + if err != nil { + continue + } + + for _, cd := range relevantClusterDeployments { + if selector.Matches(labels.Set(cd.ObjectMeta.GetLabels())) { + requests = append(requests, reconcile.Request{ + NamespacedName: types.NamespacedName{ + Name: pdi.Name, + Namespace: pdi.Namespace, + }}, + ) + } + } + } + return requests +} diff --git a/pkg/controller/pagerdutyintegration/mappers_test.go b/pkg/controller/pagerdutyintegration/mappers_test.go new file mode 100644 index 00000000..41f4541b --- /dev/null +++ b/pkg/controller/pagerdutyintegration/mappers_test.go @@ -0,0 +1,198 @@ +// Copyright 2020 Red Hat +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package pagerdutyintegration + +import ( + "testing" + + hiveapis "github.com/openshift/hive/pkg/apis" + hivev1 "github.com/openshift/hive/pkg/apis/hive/v1" + pagerdutyapis "github.com/openshift/pagerduty-operator/pkg/apis" + pagerdutyv1alpha1 "github.com/openshift/pagerduty-operator/pkg/apis/pagerduty/v1alpha1" + "github.com/stretchr/testify/assert" + v1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/apimachinery/pkg/types" + "k8s.io/client-go/kubernetes/scheme" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + "sigs.k8s.io/controller-runtime/pkg/handler" + "sigs.k8s.io/controller-runtime/pkg/reconcile" +) + +func TestClusterDeploymentToPagerDutyIntegrationsMapper(t *testing.T) { + pagerdutyapis.AddToScheme(scheme.Scheme) + hiveapis.AddToScheme(scheme.Scheme) + + tests := []struct { + name string + mapper func(client client.Client) handler.Mapper + objects []runtime.Object + mapObject handler.MapObject + expectedRequests []reconcile.Request + }{ + { + name: "clusterDeploymentToPagerDutyIntegrations: empty", + mapper: clusterDeploymentToPagerDutyIntegrations, + objects: []runtime.Object{}, + mapObject: handler.MapObject{}, + expectedRequests: []reconcile.Request{}, + }, + { + name: "clusterDeploymentToPagerDutyIntegrations: two matching PagerDutyIntegrations, one not matching", + mapper: clusterDeploymentToPagerDutyIntegrations, + objects: []runtime.Object{ + pagerDutyIntegration("test1", map[string]string{"test": "test"}), + pagerDutyIntegration("test2", map[string]string{"test": "test"}), + pagerDutyIntegration("test3", map[string]string{"notmatching": "test"}), + }, + mapObject: handler.MapObject{ + Meta: &metav1.ObjectMeta{ + Labels: map[string]string{"test": "test"}, + }, + }, + expectedRequests: []reconcile.Request{ + { + NamespacedName: types.NamespacedName{ + Name: "test1", + Namespace: "test", + }, + }, + { + NamespacedName: types.NamespacedName{ + Name: "test2", + Namespace: "test", + }, + }, + }, + }, + + { + name: "ownedByClusterDeploymentToPagerDutyIntegrations: empty", + mapper: ownedByClusterDeploymentToPagerDutyIntegrations, + objects: []runtime.Object{}, + mapObject: handler.MapObject{ + Meta: &metav1.ObjectMeta{ + OwnerReferences: 
[]metav1.OwnerReference{{ + APIVersion: hivev1.SchemeGroupVersion.String(), + Kind: "ClusterDeployment", + Name: "test", + UID: types.UID("test"), + }}, + }, + }, + expectedRequests: []reconcile.Request{}, + }, + { + name: "ownedByClusterDeploymentToPagerDutyIntegrations: matched by 3 PagerDutyIntegrations", + mapper: ownedByClusterDeploymentToPagerDutyIntegrations, + objects: []runtime.Object{ + pagerDutyIntegration("test1", map[string]string{"test": "test"}), + pagerDutyIntegration("test2", map[string]string{"test": "test"}), + pagerDutyIntegration("test3", map[string]string{"test": "test"}), + pagerDutyIntegration("test4", map[string]string{"notmatching": "test"}), + clusterDeployment("cd1", map[string]string{"test": "test"}), + }, + mapObject: handler.MapObject{ + Meta: &metav1.ObjectMeta{ + OwnerReferences: []metav1.OwnerReference{{ + APIVersion: hivev1.SchemeGroupVersion.String(), + Kind: "ClusterDeployment", + Name: "cd1", + UID: types.UID("test"), + }}, + }, + }, + expectedRequests: []reconcile.Request{ + { + NamespacedName: types.NamespacedName{ + Name: "test1", + Namespace: "test", + }, + }, + { + NamespacedName: types.NamespacedName{ + Name: "test2", + Namespace: "test", + }, + }, + { + NamespacedName: types.NamespacedName{ + Name: "test3", + Namespace: "test", + }, + }, + }, + }, + } + + for _, test := range tests { + t.Run(test.name, func(t *testing.T) { + client := fake.NewFakeClient(test.objects...) + mapper := test.mapper(client) + + actualRequests := mapper.Map(test.mapObject) + + assert.Equal(t, test.expectedRequests, actualRequests) + }) + } +} + +func clusterDeploymentToPagerDutyIntegrations(client client.Client) handler.Mapper { + return clusterDeploymentToPagerDutyIntegrationsMapper{Client: client} +} + +func ownedByClusterDeploymentToPagerDutyIntegrations(client client.Client) handler.Mapper { + return ownedByClusterDeploymentToPagerDutyIntegrationsMapper{Client: client} +} + +func pagerDutyIntegration(name string, labels map[string]string) *pagerdutyv1alpha1.PagerDutyIntegration { + return &pagerdutyv1alpha1.PagerDutyIntegration{ + ObjectMeta: metav1.ObjectMeta{ + Name: name, + Namespace: "test", + }, + Spec: pagerdutyv1alpha1.PagerDutyIntegrationSpec{ + EscalationPolicy: "ABC123", + ClusterDeploymentSelector: metav1.LabelSelector{ + MatchLabels: labels, + }, + ServicePrefix: "test", + PagerdutyApiKeySecretRef: v1.SecretReference{ + Name: "test", + Namespace: "test", + }, + TargetSecretRef: v1.SecretReference{ + Name: "test", + Namespace: "test", + }, + }, + } +} + +func clusterDeployment(name string, labels map[string]string) *hivev1.ClusterDeployment { + return &hivev1.ClusterDeployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: name, + Namespace: "test", + Labels: labels, + }, + Spec: hivev1.ClusterDeploymentSpec{ + ClusterName: name, + Installed: true, + }, + } +} diff --git a/pkg/controller/pagerdutyintegration/pagerdutyintegration_controller.go b/pkg/controller/pagerdutyintegration/pagerdutyintegration_controller.go new file mode 100644 index 00000000..0f7eee07 --- /dev/null +++ b/pkg/controller/pagerdutyintegration/pagerdutyintegration_controller.go @@ -0,0 +1,273 @@ +// Copyright 2020 Red Hat +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package pagerdutyintegration + +import ( + "context" + + "github.com/go-logr/logr" + hivev1 "github.com/openshift/hive/pkg/apis/hive/v1" + "github.com/openshift/pagerduty-operator/config" + pagerdutyv1alpha1 "github.com/openshift/pagerduty-operator/pkg/apis/pagerduty/v1alpha1" + pd "github.com/openshift/pagerduty-operator/pkg/pagerduty" + "github.com/openshift/pagerduty-operator/pkg/utils" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/controller" + "sigs.k8s.io/controller-runtime/pkg/handler" + "sigs.k8s.io/controller-runtime/pkg/manager" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + logf "sigs.k8s.io/controller-runtime/pkg/runtime/log" + "sigs.k8s.io/controller-runtime/pkg/source" +) + +var log = logf.Log.WithName("controller_pagerdutyintegration") + +// Add creates a new PagerDutyIntegration Controller and adds it to the Manager. The Manager will set fields on the Controller +// and Start it when the Manager is Started. +func Add(mgr manager.Manager) error { + newRec, err := newReconciler(mgr) + if err != nil { + return err + } + + return add(mgr, newRec) +} + +// newReconciler returns a new reconcile.Reconciler +func newReconciler(mgr manager.Manager) (reconcile.Reconciler, error) { + tempClient, err := client.New(mgr.GetConfig(), client.Options{Scheme: mgr.GetScheme()}) + if err != nil { + return nil, err + } + + // get PD API key from secret + pdAPIKey, err := utils.LoadSecretData(tempClient, config.PagerDutyAPISecretName, config.OperatorNamespace, config.PagerDutyAPISecretKey) + if err != nil { + return nil, err + } + + return &ReconcilePagerDutyIntegration{ + client: mgr.GetClient(), + scheme: mgr.GetScheme(), + pdclient: pd.NewClient(pdAPIKey), + }, nil +} + +// add adds a new Controller to mgr with r as the reconcile.Reconciler +func add(mgr manager.Manager, r reconcile.Reconciler) error { + // Create a new controller + c, err := controller.New("pagerdutyintegration-controller", mgr, controller.Options{Reconciler: r}) + if err != nil { + return err + } + + // Watch for changes to primary resource PagerDutyIntegration + err = c.Watch(&source.Kind{Type: &pagerdutyv1alpha1.PagerDutyIntegration{}}, &handler.EnqueueRequestForObject{}) + if err != nil { + return err + } + + // Watch for changes to ClusterDeployments, and queue a request for all + // PagerDutyIntegration CRs that select it. + err = c.Watch(&source.Kind{Type: &hivev1.ClusterDeployment{}}, + &handler.EnqueueRequestsFromMapFunc{ + ToRequests: clusterDeploymentToPagerDutyIntegrationsMapper{ + Client: mgr.GetClient(), + }, + }, + ) + if err != nil { + return err + } + + // Watch for changes to SyncSets. If one has any ClusterDeployment owner + // references, queue a request for all PagerDutyIntegration CRs that + // select those ClusterDeployments.
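+ // + // A sketch of the fan-out, with hypothetical names: a SyncSet owned by + // ClusterDeployment "cd-1" whose labels are selected by the + // PagerDutyIntegrations "pdi-a" and "pdi-b" makes the mapper enqueue one + // request per matching CR, e.g. + // + // []reconcile.Request{ + // {NamespacedName: types.NamespacedName{Name: "pdi-a", Namespace: "pagerduty-operator"}}, + // {NamespacedName: types.NamespacedName{Name: "pdi-b", Namespace: "pagerduty-operator"}}, + // }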
+ err = c.Watch(&source.Kind{Type: &hivev1.SyncSet{}}, + &handler.EnqueueRequestsFromMapFunc{ + ToRequests: ownedByClusterDeploymentToPagerDutyIntegrationsMapper{ + Client: mgr.GetClient(), + }, + }, + ) + if err != nil { + return err + } + + // Watch for changes to Secrets. If one has any ClusterDeployment owner + // references, queue a request for all PagerDutyIntegration CRs that + // select those ClusterDeployments. + err = c.Watch(&source.Kind{Type: &corev1.Secret{}}, + &handler.EnqueueRequestsFromMapFunc{ + ToRequests: ownedByClusterDeploymentToPagerDutyIntegrationsMapper{ + Client: mgr.GetClient(), + }, + }, + ) + if err != nil { + return err + } + + // Watch for changes to ConfigMaps. If one has any ClusterDeployment + // owner references, queue a request for all PagerDutyIntegration CRs + // that select those ClusterDeployments. + err = c.Watch(&source.Kind{Type: &corev1.ConfigMap{}}, + &handler.EnqueueRequestsFromMapFunc{ + ToRequests: ownedByClusterDeploymentToPagerDutyIntegrationsMapper{ + Client: mgr.GetClient(), + }, + }, + ) + if err != nil { + return err + } + + return nil +} + +// blank assignment to verify that ReconcilePagerDutyIntegration implements reconcile.Reconciler +var _ reconcile.Reconciler = &ReconcilePagerDutyIntegration{} + +// ReconcilePagerDutyIntegration reconciles a PagerDutyIntegration object +type ReconcilePagerDutyIntegration struct { + // This client, initialized using mgr.Client() above, is a split client + // that reads objects from the cache and writes to the apiserver + client client.Client + scheme *runtime.Scheme + reqLogger logr.Logger + pdclient pd.Client +} + +// Reconcile reads the state of the cluster for a PagerDutyIntegration object and makes changes based on the state read +// and what is in the PagerDutyIntegration.Spec +// Note: +// The Controller will requeue the Request to be processed again if the returned error is non-nil or +// Result.Requeue is true, otherwise upon completion it will remove the work from the queue. +func (r *ReconcilePagerDutyIntegration) Reconcile(request reconcile.Request) (reconcile.Result, error) { + r.reqLogger = log.WithValues("Request.Namespace", request.Namespace, "Request.Name", request.Name) + r.reqLogger.Info("Reconciling PagerDutyIntegration") + + // Fetch the PagerDutyIntegration instance + pdi := &pagerdutyv1alpha1.PagerDutyIntegration{} + err := r.client.Get(context.TODO(), request.NamespacedName, pdi) + if err != nil { + if errors.IsNotFound(err) { + // Request object not found, could have been deleted after reconcile request. + // Owned objects are automatically garbage collected. For additional cleanup logic use finalizers. + // Return and don't requeue + return r.doNotRequeue() + } + // Error reading the object - requeue the request.
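+ // requeueOnErr (defined below) returns (reconcile.Result{}, err); per the + // note above, the non-nil error is what makes controller-runtime retry + // this request.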
+ return r.requeueOnErr(err) + } + + matchingClusterDeployments, err := r.getMatchingClusterDeployments(pdi) + if err != nil { + return r.requeueOnErr(err) + } + + if pdi.DeletionTimestamp != nil { + if utils.HasFinalizer(pdi, config.OperatorFinalizer) { + for _, cd := range matchingClusterDeployments.Items { + err := r.handleDelete(pdi, &cd) + if err != nil { + return r.requeueOnErr(err) + } + } + + utils.DeleteFinalizer(pdi, config.OperatorFinalizer) + err = r.client.Update(context.TODO(), pdi) + if err != nil { + return r.requeueOnErr(err) + } + } + return r.doNotRequeue() + } + + if !utils.HasFinalizer(pdi, config.OperatorFinalizer) { + utils.AddFinalizer(pdi, config.OperatorFinalizer) + err := r.client.Update(context.TODO(), pdi) + if err != nil { + return r.requeueOnErr(err) + } + } + + // TODO: Remove all of this migration code in a future release. + // Start migration + const MigrationAnnotation string = "pd.openshift.io/legacy" + for _, cd := range matchingClusterDeployments.Items { + if pdi.Annotations[MigrationAnnotation] != "" { + err := r.handleMigrate(pdi, &cd) + if err != nil { + r.reqLogger.Error( + err, + "Error while trying to migrate legacy resources, this may result in a new PagerDuty Service created for this ClusterDeployment", + "ClusterDeployment.Name", cd.Name, "ClusterDeployment.Namespace", cd.Namespace, + "PagerDutyIntegration.Name", pdi.Name, "PagerDutyIntegration.Namespace", pdi.Namespace, + ) + return r.requeueOnErr(err) + } + } + } + if pdi.Annotations[MigrationAnnotation] != "" { + delete(pdi.Annotations, MigrationAnnotation) + err = r.client.Update(context.TODO(), pdi) + if err != nil { + return r.requeueOnErr(err) + } + } + // End migration + + for _, cd := range matchingClusterDeployments.Items { + if cd.DeletionTimestamp != nil || cd.Labels[config.ClusterDeploymentNoalertsLabel] == "true" { + err := r.handleDelete(pdi, &cd) + if err != nil { + return r.requeueOnErr(err) + } + } else { + err := r.handleCreate(pdi, &cd) + if err != nil { + return r.requeueOnErr(err) + } + } + } + + return r.doNotRequeue() +} + +func (r *ReconcilePagerDutyIntegration) getMatchingClusterDeployments(pdi *pagerdutyv1alpha1.PagerDutyIntegration) (*hivev1.ClusterDeploymentList, error) { + selector, err := metav1.LabelSelectorAsSelector(&pdi.Spec.ClusterDeploymentSelector) + if err != nil { + return nil, err + } + + matchingClusterDeployments := &hivev1.ClusterDeploymentList{} + listOpts := &client.ListOptions{LabelSelector: selector} + err = r.client.List(context.TODO(), matchingClusterDeployments, listOpts) + return matchingClusterDeployments, err +} +func (r *ReconcilePagerDutyIntegration) doNotRequeue() (reconcile.Result, error) { + return reconcile.Result{}, nil +} + +func (r *ReconcilePagerDutyIntegration) requeueOnErr(err error) (reconcile.Result, error) { + return reconcile.Result{}, err +} diff --git a/pkg/controller/clusterdeployment/clusterdeployment_controller_test.go b/pkg/controller/pagerdutyintegration/pagerdutyintegration_controller_test.go similarity index 62% rename from pkg/controller/clusterdeployment/clusterdeployment_controller_test.go rename to pkg/controller/pagerdutyintegration/pagerdutyintegration_controller_test.go index 012eac99..2abc7490 100644 --- a/pkg/controller/clusterdeployment/clusterdeployment_controller_test.go +++ b/pkg/controller/pagerdutyintegration/pagerdutyintegration_controller_test.go @@ -1,4 +1,18 @@ -package clusterdeployment +// Copyright 2020 Red Hat +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may 
not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package pagerdutyintegration import ( "context" @@ -9,6 +23,8 @@ import ( hiveapis "github.com/openshift/hive/pkg/apis" hivev1 "github.com/openshift/hive/pkg/apis/hive/v1" "github.com/openshift/pagerduty-operator/config" + pagerdutyapis "github.com/openshift/pagerduty-operator/pkg/apis" + pagerdutyv1alpha1 "github.com/openshift/pagerduty-operator/pkg/apis/pagerduty/v1alpha1" "github.com/openshift/pagerduty-operator/pkg/kube" mockpd "github.com/openshift/pagerduty-operator/pkg/pagerduty/mock" "github.com/stretchr/testify/assert" @@ -24,16 +40,18 @@ import ( ) const ( - testClusterName = "testCluster" - testNamespace = "testNamespace" - testIntegrationID = "ABC123" - testServiceID = "DEF456" - testAPIKey = "test-pd-api-key" - testEscalationPolicy = "test-escalation-policy" - testResolveTimeout = "300" - testAcknowledgeTimeout = "300" - testOtherSyncSetPostfix = "-something-else" - testsecretReferencesName = "pd-secret" + testPagerDutyIntegrationName = "testPagerDutyIntegration" + testClusterName = "testCluster" + testNamespace = "testNamespace" + testIntegrationID = "ABC123" + testServiceID = "DEF456" + testAPIKey = "test-pd-api-key" + testEscalationPolicy = "test-escalation-policy" + testResolveTimeout = 300 + testAcknowledgeTimeout = 300 + testOtherSyncSetPostfix = "-something-else" + testsecretReferencesName = "pd-secret" + testServicePrefix = "test-service-prefix" ) type SyncSetEntry struct { @@ -89,9 +107,6 @@ func testPDConfigSecret() *corev1.Secret { }, Data: map[string][]byte{ config.PagerDutyAPISecretKey: []byte(testAPIKey), - "ESCALATION_POLICY": []byte(testEscalationPolicy), - "RESOLVE_TIMEOUT": []byte(testResolveTimeout), - "ACKNOWLEDGE_TIMEOUT": []byte(testAcknowledgeTimeout), }, } return s @@ -102,7 +117,7 @@ func testPDConfigMap() *corev1.ConfigMap { cm := &corev1.ConfigMap{ ObjectMeta: metav1.ObjectMeta{ Namespace: testNamespace, - Name: testClusterName + config.ConfigMapPostfix, + Name: config.Name(testServicePrefix, testClusterName, config.ConfigMapSuffix), }, Data: map[string]string{ "INTEGRATION_ID": testIntegrationID, @@ -128,8 +143,9 @@ func testSecret() *corev1.Secret { // testSyncSet returns a SyncSet for an existing testClusterDeployment to use in testing. 
func testSyncSet() *hivev1.SyncSet { - secret := kube.GeneratePdSecret(testNamespace, config.PagerDutySecretName, testIntegrationID) - ss := kube.GenerateSyncSet(testNamespace, testClusterName+config.SyncSetPostfix, secret) + secretName := config.Name(testServicePrefix, testClusterName, config.SecretSuffix) + secret := kube.GeneratePdSecret(testNamespace, secretName, testIntegrationID) + ss := kube.GenerateSyncSet(testNamespace, testClusterName, secret) return ss } @@ -150,6 +166,32 @@ func testOtherSyncSet() *hivev1.SyncSet { } } +func testPagerDutyIntegration() *pagerdutyv1alpha1.PagerDutyIntegration { + return &pagerdutyv1alpha1.PagerDutyIntegration{ + ObjectMeta: metav1.ObjectMeta{ + Name: testPagerDutyIntegrationName, + Namespace: config.OperatorNamespace, + }, + Spec: pagerdutyv1alpha1.PagerDutyIntegrationSpec{ + AcknowledgeTimeout: testAcknowledgeTimeout, + ResolveTimeout: testResolveTimeout, + EscalationPolicy: testEscalationPolicy, + ServicePrefix: testServicePrefix, + ClusterDeploymentSelector: metav1.LabelSelector{ + MatchLabels: map[string]string{config.ClusterDeploymentManagedLabel: "true"}, + }, + PagerdutyApiKeySecretRef: corev1.SecretReference{ + Name: config.PagerDutyAPISecretName, + Namespace: config.OperatorNamespace, + }, + TargetSecretRef: corev1.SecretReference{ + Name: config.Name(testServicePrefix, testClusterName, config.SecretSuffix), + Namespace: testNamespace, + }, + }, + } +} + // testClusterDeployment returns a fake ClusterDeployment for an installed cluster to use in testing. func testClusterDeployment() *hivev1.ClusterDeployment { labelMap := map[string]string{config.ClusterDeploymentManagedLabel: "true"} @@ -190,11 +232,11 @@ func testNoalertsClusterDeployment() *hivev1.ClusterDeployment { } // deletedClusterDeployment returns a fake deleted ClusterDeployment to use in testing. 
-func deletedClusterDeployment() *hivev1.ClusterDeployment { +func deletedClusterDeployment(pdiName string) *hivev1.ClusterDeployment { cd := testClusterDeployment() now := metav1.Now() cd.DeletionTimestamp = &now - cd.SetFinalizers([]string{config.OperatorFinalizer}) + cd.SetFinalizers([]string{"pd.managed.openshift.io/" + pdiName}) return cd } @@ -222,8 +264,9 @@ func uninstalledClusterDeployment() *hivev1.ClusterDeployment { return cd } -func TestReconcileClusterDeployment(t *testing.T) { +func TestReconcilePagerDutyIntegration(t *testing.T) { hiveapis.AddToScheme(scheme.Scheme) + pagerdutyapis.AddToScheme(scheme.Scheme) tests := []struct { name string localObjects []runtime.Object @@ -233,19 +276,19 @@ func TestReconcileClusterDeployment(t *testing.T) { verifySecrets func(client.Client, *SecretEntry) bool setupPDMock func(*mockpd.MockClientMockRecorder) }{ - { name: "Test Creating", localObjects: []runtime.Object{ testClusterDeployment(), testPDConfigSecret(), + testPagerDutyIntegration(), }, expectedSyncSets: &SyncSetEntry{ - name: testClusterName + config.SyncSetPostfix, + name: config.Name(testServicePrefix, testClusterName, config.SecretSuffix), clusterDeploymentRefName: testClusterName, }, expectedSecrets: &SecretEntry{ - name: testsecretReferencesName, + name: config.Name(testServicePrefix, testClusterName, config.SecretSuffix), pagerdutyKey: testIntegrationID, }, verifySyncSets: verifySyncSetExists, @@ -258,9 +301,10 @@ func TestReconcileClusterDeployment(t *testing.T) { { name: "Test Deleting", localObjects: []runtime.Object{ - deletedClusterDeployment(), + deletedClusterDeployment(testPagerDutyIntegrationName), testPDConfigSecret(), testPDConfigMap(), + testPagerDutyIntegration(), }, expectedSyncSets: &SyncSetEntry{}, expectedSecrets: &SecretEntry{}, @@ -273,20 +317,21 @@ func TestReconcileClusterDeployment(t *testing.T) { { name: "Test Deleting with missing ConfigMap", localObjects: []runtime.Object{ - deletedClusterDeployment(), + deletedClusterDeployment(testPagerDutyIntegrationName), testPDConfigSecret(), + testPagerDutyIntegration(), }, expectedSyncSets: &SyncSetEntry{}, expectedSecrets: &SecretEntry{}, verifySyncSets: verifyNoSyncSetExists, verifySecrets: verifyNoSecretExists, - setupPDMock: func(r *mockpd.MockClientMockRecorder) { - }, + setupPDMock: func(r *mockpd.MockClientMockRecorder) {}, }, { - name: "Test Creating (unmanaged with label)", + name: "Test Uninstalled Cluster", localObjects: []runtime.Object{ - unmanagedClusterDeployment(), + uninstalledClusterDeployment(), + testPagerDutyIntegration(), }, expectedSyncSets: &SyncSetEntry{}, expectedSecrets: &SecretEntry{}, @@ -296,33 +341,47 @@ func TestReconcileClusterDeployment(t *testing.T) { }, }, { - name: "Test Creating (unmanaged without label)", + name: "Test Updating", localObjects: []runtime.Object{ - unlabelledClusterDeployment(), + testClusterDeployment(), + testSecret(), + testSyncSet(), + testPDConfigMap(), + testPDConfigSecret(), + testPagerDutyIntegration(), }, - expectedSyncSets: &SyncSetEntry{}, - expectedSecrets: &SecretEntry{}, - verifySyncSets: verifyNoSyncSetExists, - verifySecrets: verifyNoSecretExists, + expectedSyncSets: &SyncSetEntry{ + name: config.Name(testServicePrefix, testClusterName, config.SecretSuffix), + clusterDeploymentRefName: testClusterName, + }, + expectedSecrets: &SecretEntry{ + name: config.Name(testServicePrefix, testClusterName, config.SecretSuffix), + pagerdutyKey: testIntegrationID, + }, + verifySyncSets: verifySyncSetExists, + verifySecrets: verifySecretExists, 
setupPDMock: func(r *mockpd.MockClientMockRecorder) { + r.GetIntegrationKey(gomock.Any()).Return(testIntegrationID, nil).Times(1) }, }, { - name: "Test Creating (managed with noalerts)", + name: "Test Creating (unmanaged with label)", localObjects: []runtime.Object{ - testNoalertsClusterDeployment(), + unmanagedClusterDeployment(), + testPDConfigSecret(), + testPagerDutyIntegration(), }, expectedSyncSets: &SyncSetEntry{}, expectedSecrets: &SecretEntry{}, verifySyncSets: verifyNoSyncSetExists, verifySecrets: verifyNoSecretExists, - setupPDMock: func(r *mockpd.MockClientMockRecorder) { - }, + setupPDMock: func(r *mockpd.MockClientMockRecorder) {}, }, { - name: "Test Uninstalled Cluster", + name: "Test Creating (unmanaged without label)", localObjects: []runtime.Object{ - uninstalledClusterDeployment(), + unlabelledClusterDeployment(), + testPagerDutyIntegration(), }, expectedSyncSets: &SyncSetEntry{}, expectedSecrets: &SecretEntry{}, @@ -332,26 +391,16 @@ func TestReconcileClusterDeployment(t *testing.T) { }, }, { - name: "Test Updating", + name: "Test Creating (managed with noalerts)", localObjects: []runtime.Object{ - testClusterDeployment(), - testSecret(), - testSyncSet(), - testPDConfigMap(), - testPDConfigSecret(), - }, - expectedSyncSets: &SyncSetEntry{ - name: testClusterName + config.SyncSetPostfix, - clusterDeploymentRefName: testClusterName, - }, - expectedSecrets: &SecretEntry{ - name: testsecretReferencesName, - pagerdutyKey: testIntegrationID, + testNoalertsClusterDeployment(), + testPagerDutyIntegration(), }, - verifySyncSets: verifySyncSetExists, - verifySecrets: verifySecretExists, + expectedSyncSets: &SyncSetEntry{}, + expectedSecrets: &SecretEntry{}, + verifySyncSets: verifyNoSyncSetExists, + verifySecrets: verifyNoSecretExists, setupPDMock: func(r *mockpd.MockClientMockRecorder) { - r.GetIntegrationKey(gomock.Any()).Return(testIntegrationID, nil).Times(1) }, }, { @@ -359,6 +408,8 @@ func TestReconcileClusterDeployment(t *testing.T) { localObjects: []runtime.Object{ testNoalertsClusterDeployment(), testOtherSyncSet(), + testPagerDutyIntegration(), + testPDConfigSecret(), }, expectedSyncSets: &SyncSetEntry{name: testClusterName + testOtherSyncSetPostfix, clusterDeploymentRefName: testClusterName}, expectedSecrets: &SecretEntry{}, @@ -368,6 +419,7 @@ func TestReconcileClusterDeployment(t *testing.T) { }, }, } + for _, test := range tests { t.Run(test.name, func(t *testing.T) { // Arrange @@ -376,17 +428,23 @@ func TestReconcileClusterDeployment(t *testing.T) { defer mocks.mockCtrl.Finish() - rcd := &ReconcileClusterDeployment{ + rpdi := &ReconcilePagerDutyIntegration{ client: mocks.fakeKubeClient, scheme: scheme.Scheme, pdclient: mocks.mockPDClient, } - // Act - _, err := rcd.Reconcile(reconcile.Request{ + // Act [2x as first exits early after setting finalizer] + _, err := rpdi.Reconcile(reconcile.Request{ NamespacedName: types.NamespacedName{ - Name: testClusterName, - Namespace: testNamespace, + Name: testPagerDutyIntegrationName, + Namespace: config.OperatorNamespace, + }, + }) + _, err = rpdi.Reconcile(reconcile.Request{ + NamespacedName: types.NamespacedName{ + Name: testPagerDutyIntegrationName, + Namespace: config.OperatorNamespace, }, }) @@ -407,6 +465,7 @@ func TestRemoveAlertsAfterCreate(t *testing.T) { testOtherSyncSet(), testPDConfigSecret(), testPDConfigMap(), // <-- see comment below + testPagerDutyIntegration(), }) // in order to test the delete, we need to create the pd secret w/ a non-empty SERVICE_ID, which means CreateService won't be called @@ -423,17 
+482,23 @@ func TestRemoveAlertsAfterCreate(t *testing.T) { defer mocks.mockCtrl.Finish() - rcd := &ReconcileClusterDeployment{ + rpdi := &ReconcilePagerDutyIntegration{ client: mocks.fakeKubeClient, scheme: scheme.Scheme, pdclient: mocks.mockPDClient, } - // Act (create) - _, err := rcd.Reconcile(reconcile.Request{ + // Act (create) [2x as first exits early after setting finalizer] + _, err := rpdi.Reconcile(reconcile.Request{ NamespacedName: types.NamespacedName{ - Name: testClusterName, - Namespace: testNamespace, + Name: testPagerDutyIntegrationName, + Namespace: config.OperatorNamespace, + }, + }) + _, err = rpdi.Reconcile(reconcile.Request{ + NamespacedName: types.NamespacedName{ + Name: testPagerDutyIntegrationName, + Namespace: config.OperatorNamespace, }, }) @@ -447,16 +512,16 @@ func TestRemoveAlertsAfterCreate(t *testing.T) { err = mocks.fakeKubeClient.Get(context.TODO(), types.NamespacedName{Namespace: testNamespace, Name: testClusterName}, clusterDeployment) // Act (delete) [2x because was seeing other SyncSet's getting deleted] - _, err = rcd.Reconcile(reconcile.Request{ + _, err = rpdi.Reconcile(reconcile.Request{ NamespacedName: types.NamespacedName{ - Name: testClusterName, - Namespace: testNamespace, + Name: testPagerDutyIntegrationName, + Namespace: config.OperatorNamespace, }, }) - _, err = rcd.Reconcile(reconcile.Request{ + _, err = rpdi.Reconcile(reconcile.Request{ NamespacedName: types.NamespacedName{ - Name: testClusterName, - Namespace: testNamespace, + Name: testPagerDutyIntegrationName, + Namespace: config.OperatorNamespace, }, }) @@ -474,15 +539,16 @@ func TestDeleteSecret(t *testing.T) { mocks := setupDefaultMocks(t, []runtime.Object{ testClusterDeployment(), testPDConfigSecret(), + testPagerDutyIntegration(), }) expectedSyncSets := &SyncSetEntry{ - name: testClusterName + config.SyncSetPostfix, + name: config.Name(testServicePrefix, testClusterName, config.SecretSuffix), clusterDeploymentRefName: testClusterName, } expectedSecrets := &SecretEntry{ - name: testsecretReferencesName, + name: config.Name(testServicePrefix, testClusterName, config.SecretSuffix), pagerdutyKey: testIntegrationID, } @@ -496,30 +562,39 @@ func TestDeleteSecret(t *testing.T) { defer mocks.mockCtrl.Finish() - rcd := &ReconcileClusterDeployment{ + rpdi := &ReconcilePagerDutyIntegration{ client: mocks.fakeKubeClient, scheme: scheme.Scheme, pdclient: mocks.mockPDClient, } - // Act (create) - _, err := rcd.Reconcile(reconcile.Request{ + // Act (create) [2x as first exits early after setting finalizer] + _, err := rpdi.Reconcile(reconcile.Request{ NamespacedName: types.NamespacedName{ - Name: testClusterName, - Namespace: testNamespace, + Name: testPagerDutyIntegrationName, + Namespace: config.OperatorNamespace, + }, + }) + _, err = rpdi.Reconcile(reconcile.Request{ + NamespacedName: types.NamespacedName{ + Name: testPagerDutyIntegrationName, + Namespace: config.OperatorNamespace, }, }) // Remove the secret which is referred by the syncset secret := &corev1.Secret{} - err = mocks.fakeKubeClient.Get(context.TODO(), types.NamespacedName{Namespace: testNamespace, Name: testsecretReferencesName}, secret) + err = mocks.fakeKubeClient.Get(context.TODO(), types.NamespacedName{ + Namespace: testNamespace, + Name: config.Name(testServicePrefix, testClusterName, config.SecretSuffix), + }, secret) err = mocks.fakeKubeClient.Delete(context.TODO(), secret) // Act (reconcile again) - _, err = rcd.Reconcile(reconcile.Request{ + _, err = rpdi.Reconcile(reconcile.Request{ NamespacedName: types.NamespacedName{ 
- Name: testClusterName, - Namespace: testNamespace, + Name: testPagerDutyIntegrationName, + Namespace: config.OperatorNamespace, }, }) @@ -530,6 +605,139 @@ } +func TestMigration(t *testing.T) { + t.Run("Test Managed Cluster that later sets noalerts label", func(t *testing.T) { + // Arrange + mocks := setupDefaultMocks(t, []runtime.Object{ + testClusterDeployment(), + testPDConfigSecret(), + testMigratoryPagerDutyIntegration(), + testLegacyPDConfigMap(), + testLegacySecret(), + testLegacySyncSet(), + }) + + setupPDMock := func(r *mockpd.MockClientMockRecorder) { + // create (without calling PD) + r.GetIntegrationKey(gomock.Any()).Return(testIntegrationID, nil).Times(1) + } + setupPDMock(mocks.mockPDClient.EXPECT()) + + defer mocks.mockCtrl.Finish() + + rpdi := &ReconcilePagerDutyIntegration{ + client: mocks.fakeKubeClient, + scheme: scheme.Scheme, + pdclient: mocks.mockPDClient, + } + + // Act [2x as first exits early after setting finalizer] + _, err := rpdi.Reconcile(reconcile.Request{ + NamespacedName: types.NamespacedName{ + Name: testPagerDutyIntegrationName, + Namespace: config.OperatorNamespace, + }, + }) + _, err = rpdi.Reconcile(reconcile.Request{ + NamespacedName: types.NamespacedName{ + Name: testPagerDutyIntegrationName, + Namespace: config.OperatorNamespace, + }, + }) + + // Assert + assert.NoError(t, err, "Unexpected Error") + assert.True(t, verifySyncSetExists(mocks.fakeKubeClient, &SyncSetEntry{ + name: config.Name(testServicePrefix, testClusterName, config.SecretSuffix), + clusterDeploymentRefName: testClusterName, + })) + assert.True(t, verifySecretExists(mocks.fakeKubeClient, &SecretEntry{ + name: config.Name(testServicePrefix, testClusterName, config.SecretSuffix), + pagerdutyKey: testIntegrationID, + })) + assert.False(t, verifySyncSetExists(mocks.fakeKubeClient, &SyncSetEntry{ + name: testClusterName + "-pd-sync", + clusterDeploymentRefName: testClusterName, + })) + assert.False(t, verifySecretExists(mocks.fakeKubeClient, &SecretEntry{ + name: "pd-secret", + pagerdutyKey: testIntegrationID, + })) + + pdi := &pagerdutyv1alpha1.PagerDutyIntegration{} + err = mocks.fakeKubeClient.Get( + context.TODO(), + types.NamespacedName{Name: testPagerDutyIntegrationName, Namespace: config.OperatorNamespace}, + pdi, + ) + assert.NoError(t, err, "Unexpected Error") + assert.NotContains(t, pdi.Annotations, "pd.openshift.io/legacy") + }) +} + +func testMigratoryPagerDutyIntegration() *pagerdutyv1alpha1.PagerDutyIntegration { + pdi := testPagerDutyIntegration() + pdi.Annotations = map[string]string{"pd.openshift.io/legacy": "true"} + return pdi +} + +// testLegacyPDConfigMap returns a fake legacy ConfigMap for a deployed cluster to use in testing. +func testLegacyPDConfigMap() *corev1.ConfigMap { + return &corev1.ConfigMap{ + ObjectMeta: metav1.ObjectMeta{ + Namespace: testNamespace, + Name: testClusterName + config.ConfigMapSuffix, + }, + Data: map[string]string{ + "INTEGRATION_ID": testIntegrationID, + "SERVICE_ID": testServiceID, + }, + } +} + +// testLegacySecret returns a legacy Secret that will go in the SyncSet for a deployed cluster to use in testing. +func testLegacySecret() *corev1.Secret { + return &corev1.Secret{ + ObjectMeta: metav1.ObjectMeta{ + Name: "pd-secret", + Namespace: testNamespace, + }, + Data: map[string][]byte{ + "PAGERDUTY_KEY": []byte(testIntegrationID), + }, + } +} + +// testLegacySyncSet returns a legacy SyncSet for an existing testClusterDeployment to use in testing.
+func testLegacySyncSet() *hivev1.SyncSet { + return &hivev1.SyncSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: testClusterName + "-pd-sync", + Namespace: testNamespace, + }, + Spec: hivev1.SyncSetSpec{ + ClusterDeploymentRefs: []corev1.LocalObjectReference{{ + Name: testClusterName, + }}, + SyncSetCommonSpec: hivev1.SyncSetCommonSpec{ + ResourceApplyMode: "Sync", + Secrets: []hivev1.SecretMapping{ + { + SourceRef: hivev1.SecretReference{ + Namespace: testNamespace, + Name: "pd-secret", + }, + TargetRef: hivev1.SecretReference{ + Namespace: "openshift-monitoring", + Name: "pagerduty-api-key", + }, + }, + }, + }, + }, + } +} + // verifySyncSetExists verifies that a SyncSet exists that matches the supplied expected SyncSetEntry. func verifySyncSetExists(c client.Client, expected *SyncSetEntry) bool { ss := hivev1.SyncSet{} @@ -551,7 +759,7 @@ func verifySyncSetExists(c client.Client, expected *SyncSetEntry) bool { if secretReferences == "" { return false } - return string(secretReferences) == testsecretReferencesName + return string(secretReferences) == expected.name } // verifyNoSyncSetExists verifies that there is no SyncSet present that matches the supplied expected SyncSetEntry. @@ -591,7 +799,7 @@ func verifyNoConfigMapExists(c client.Client) bool { } for _, cm := range cmList.Items { - if strings.HasSuffix(cm.Name, config.ConfigMapPostfix) { + if strings.HasSuffix(cm.Name, config.ConfigMapSuffix) { // too bad, found a configmap associated with this operator return false } diff --git a/pkg/controller/syncset/syncset_controller.go b/pkg/controller/syncset/syncset_controller.go deleted file mode 100644 index 72de0546..00000000 --- a/pkg/controller/syncset/syncset_controller.go +++ /dev/null @@ -1,147 +0,0 @@ -// Copyright 2019 RedHat -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -package syncset - -import ( - "context" - - "github.com/go-logr/logr" - hivev1 "github.com/openshift/hive/pkg/apis/hive/v1" - "github.com/openshift/pagerduty-operator/config" - pd "github.com/openshift/pagerduty-operator/pkg/pagerduty" - "github.com/openshift/pagerduty-operator/pkg/utils" - "k8s.io/apimachinery/pkg/api/errors" - "k8s.io/apimachinery/pkg/runtime" - "sigs.k8s.io/controller-runtime/pkg/client" - "sigs.k8s.io/controller-runtime/pkg/controller" - "sigs.k8s.io/controller-runtime/pkg/handler" - "sigs.k8s.io/controller-runtime/pkg/manager" - "sigs.k8s.io/controller-runtime/pkg/reconcile" - logf "sigs.k8s.io/controller-runtime/pkg/runtime/log" - "sigs.k8s.io/controller-runtime/pkg/source" -) - -var log = logf.Log.WithName("controller_syncset") - -/** -* USER ACTION REQUIRED: This is a scaffold file intended for the user to modify with their own Controller -* business logic. Delete these comments after modifying this file.* - */ - -// Add creates a new SyncSet Controller and adds it to the Manager. The Manager will set fields on the Controller -// and Start it when the Manager is Started. 
-func Add(mgr manager.Manager) error { - newRec, err := newReconciler(mgr) - if err != nil { - return err - } - - return add(mgr, newRec) -} - -// newReconciler returns a new reconcile.Reconciler -func newReconciler(mgr manager.Manager) (reconcile.Reconciler, error) { - //return &ReconcileSyncSet{client: mgr.GetClient(), scheme: mgr.GetScheme()} - - tempClient, err := client.New(mgr.GetConfig(), client.Options{Scheme: mgr.GetScheme()}) - if err != nil { - return nil, err - } - - // get PD API key from secret - pdAPIKey, err := utils.LoadSecretData(tempClient, config.PagerDutyAPISecretName, config.OperatorNamespace, config.PagerDutyAPISecretKey) - - return &ReconcileSyncSet{ - client: mgr.GetClient(), - scheme: mgr.GetScheme(), - pdclient: pd.NewClient(pdAPIKey), - }, nil -} - -// add adds a new Controller to mgr with r as the reconcile.Reconciler -func add(mgr manager.Manager, r reconcile.Reconciler) error { - // Create a new controller - c, err := controller.New("syncset-controller", mgr, controller.Options{Reconciler: r}) - if err != nil { - return err - } - - // Watch for changes to primary resource SyncSet - err = c.Watch(&source.Kind{Type: &hivev1.SyncSet{}}, &handler.EnqueueRequestForObject{}) - if err != nil { - return err - } - - return nil -} - -var _ reconcile.Reconciler = &ReconcileSyncSet{} - -// ReconcileSyncSet reconciles a SyncSet object -type ReconcileSyncSet struct { - // This client, initialized using mgr.Client() above, is a split client - // that reads objects from the cache and writes to the apiserver - client client.Client - scheme *runtime.Scheme - reqLogger logr.Logger - pdclient pd.Client -} - -// Reconcile reads that state of the cluster for a SyncSet object and makes changes based on the state read -// and what is in the SyncSet.Spec -// TODO(user): Modify this Reconcile function to implement your Controller logic. This example creates -// a Pod as an example -// Note: -// The Controller will requeue the Request to be processed again if the returned error is non-nil or -// Result.Requeue is true, otherwise upon completion it will remove the work from the queue. 
-func (r *ReconcileSyncSet) Reconcile(request reconcile.Request) (reconcile.Result, error) { - r.reqLogger = log.WithValues("Request.Namespace", request.Namespace, "Request.Name", request.Name) - r.reqLogger.Info("Reconciling SyncSet") - - // Wasn't a pagerduty - if len(request.Name) < len(config.SyncSetPostfix) { - return reconcile.Result{}, nil - } - if request.Name[len(request.Name)-len(config.SyncSetPostfix):len(request.Name)] != config.SyncSetPostfix { - return reconcile.Result{}, nil - } - - isCDCreated, _, err := utils.CheckClusterDeployment(request, r.client, r.reqLogger) - - if err != nil { - // something went wrong, requeue - return reconcile.Result{}, err - } - - // If we don't manage this cluster: log, delete, return - if !isCDCreated { - return reconcile.Result{}, utils.DeleteSyncSet(request.Name, request.Namespace, r.client, r.reqLogger) - } - - // Fetch the SyncSet instance - instance := &hivev1.SyncSet{} - err = r.client.Get(context.TODO(), request.NamespacedName, instance) - if err != nil { - if errors.IsNotFound(err) { - // the SyncSet should exist - return r.recreateSyncSet(request) - } - // something else went wrong - return reconcile.Result{}, err - } - - // SyncSet exists, nothing to do - return reconcile.Result{}, nil -} diff --git a/pkg/controller/syncset/syncset_controller_test.go b/pkg/controller/syncset/syncset_controller_test.go deleted file mode 100644 index 9432c7c8..00000000 --- a/pkg/controller/syncset/syncset_controller_test.go +++ /dev/null @@ -1,271 +0,0 @@ -package syncset - -import ( - "context" - "errors" - "testing" - - "github.com/golang/mock/gomock" - hiveapis "github.com/openshift/hive/pkg/apis" - hivev1 "github.com/openshift/hive/pkg/apis/hive/v1" - "github.com/openshift/pagerduty-operator/config" - - "github.com/openshift/pagerduty-operator/pkg/kube" - mockpd "github.com/openshift/pagerduty-operator/pkg/pagerduty/mock" - "github.com/stretchr/testify/assert" - corev1 "k8s.io/api/core/v1" - kubeerrors "k8s.io/apimachinery/pkg/api/errors" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/runtime" - "k8s.io/apimachinery/pkg/types" - "k8s.io/client-go/kubernetes/scheme" - "sigs.k8s.io/controller-runtime/pkg/client" - fakekubeclient "sigs.k8s.io/controller-runtime/pkg/client/fake" - "sigs.k8s.io/controller-runtime/pkg/reconcile" -) - -const ( - testClusterName = "testCluster" - testNamespace = "testNamespace" - testIntegrationID = "ABC123" - testsecretReferencesName = "pd-secret" -) - -type SyncSetEntry struct { - name string - pdIntegrationID string - clusterDeploymentRefName string -} - -type mocks struct { - fakeKubeClient client.Client - mockCtrl *gomock.Controller - mockPDClient *mockpd.MockClient -} - -func rawToSecret(raw runtime.RawExtension) *corev1.Secret { - decoder := scheme.Codecs.UniversalDecoder(corev1.SchemeGroupVersion) - - obj, _, err := decoder.Decode(raw.Raw, nil, nil) - if err != nil { - // okay, not everything in the syncset is necessarily a secret - return nil - } - s, ok := obj.(*corev1.Secret) - if ok { - return s - } - - return nil -} - -func setupDefaultMocks(t *testing.T, localObjects []runtime.Object) *mocks { - mocks := &mocks{ - fakeKubeClient: fakekubeclient.NewFakeClient(localObjects...), - mockCtrl: gomock.NewController(t), - } - - mocks.mockPDClient = mockpd.NewMockClient(mocks.mockCtrl) - - return mocks -} - -// return a managed ClusterDeployment -func testClusterDeployment() *hivev1.ClusterDeployment { - labelMap := map[string]string{config.ClusterDeploymentManagedLabel: "true"} - cd := 
hivev1.ClusterDeployment{ - ObjectMeta: metav1.ObjectMeta{ - Name: testClusterName, - Namespace: testNamespace, - Labels: labelMap, - }, - Spec: hivev1.ClusterDeploymentSpec{ - ClusterName: testClusterName, - }, - } - cd.Spec.Installed = true - - return &cd -} - -// return a managed ClusterDeployment with noalerts laabel -func testClusterDeploymentNoalerts() *hivev1.ClusterDeployment { - labelMap := map[string]string{ - config.ClusterDeploymentManagedLabel: "true", - config.ClusterDeploymentNoalertsLabel: "true", - } - cd := hivev1.ClusterDeployment{ - ObjectMeta: metav1.ObjectMeta{ - Name: testClusterName, - Namespace: testNamespace, - Labels: labelMap, - }, - Spec: hivev1.ClusterDeploymentSpec{ - ClusterName: testClusterName, - }, - } - cd.Spec.Installed = true - - return &cd -} - -// return a Secret that will go in the SyncSet for the deployed cluster -func testSecret() *corev1.Secret { - s := &corev1.Secret{ - ObjectMeta: metav1.ObjectMeta{ - Name: "pd-secret", - Namespace: "openshift-monitoring", - }, - Data: map[string][]byte{ - "PAGERDUTY_KEY": []byte(testIntegrationID), - }, - } - return s -} - -// return a SyncSet representing an existng integration -func testSyncSet() *hivev1.SyncSet { - s := testSecret() - return kube.GenerateSyncSet(testNamespace, testClusterName, s) -} - -func TestReconcileSyncSet(t *testing.T) { - hiveapis.AddToScheme(scheme.Scheme) - tests := []struct { - name string - localObjects []runtime.Object - expectedSyncSets *SyncSetEntry - verifySyncSets func(client.Client, *SyncSetEntry) bool - setupPDMock func(*mockpd.MockClientMockRecorder) - }{ - { - name: "Test Recreating when integration already exists in PagerDuty", - localObjects: []runtime.Object{ - testClusterDeployment(), - testSecret(), - }, - expectedSyncSets: &SyncSetEntry{ - name: testClusterName + config.SyncSetPostfix, - pdIntegrationID: testIntegrationID, - clusterDeploymentRefName: testClusterName, - }, - verifySyncSets: verifySyncSetExists, - setupPDMock: func(r *mockpd.MockClientMockRecorder) { - r.GetIntegrationKey(gomock.Any()).Return(testIntegrationID, nil).Times(1) - }, - }, - { - name: "Test [Re]creating when integration doesn't exist in PagerDuty", - localObjects: []runtime.Object{ - testClusterDeployment(), - testSecret(), - }, - expectedSyncSets: &SyncSetEntry{ - name: testClusterName + config.SyncSetPostfix, - pdIntegrationID: testIntegrationID, - clusterDeploymentRefName: testClusterName, - }, - verifySyncSets: verifySyncSetExists, - setupPDMock: func(r *mockpd.MockClientMockRecorder) { - r.CreateService(gomock.Any()).Return(testIntegrationID, nil).Times(1) - r.GetIntegrationKey(gomock.Any()).Return(testIntegrationID, errors.New("Integration not found")).Times(1) - r.GetIntegrationKey(gomock.Any()).Return(testIntegrationID, nil).Times(1) - }, - }, - { - name: "Test SyncSet with no matching ClusterDeployment", - localObjects: []runtime.Object{ - testSecret(), - }, - expectedSyncSets: &SyncSetEntry{}, - verifySyncSets: verifyNoSyncSetExists, - setupPDMock: func(r *mockpd.MockClientMockRecorder) { - }, - }, - { - name: "Test ignore missing SyncSet with noalerts ClusterDeployment", - localObjects: []runtime.Object{ - testClusterDeploymentNoalerts(), - testSecret(), - }, - expectedSyncSets: &SyncSetEntry{}, - verifySyncSets: verifyNoSyncSetExists, - setupPDMock: func(r *mockpd.MockClientMockRecorder) { - }, - }, - { - name: "Test delete SyncSet with noalerts ClusterDeployment", - localObjects: []runtime.Object{ - testClusterDeploymentNoalerts(), - testSyncSet(), - testSecret(), - }, - 
expectedSyncSets: &SyncSetEntry{}, - verifySyncSets: verifyNoSyncSetExists, - setupPDMock: func(r *mockpd.MockClientMockRecorder) { - }, - }, - } - for _, test := range tests { - t.Run(test.name, func(t *testing.T) { - // Arrange - mocks := setupDefaultMocks(t, test.localObjects) - test.setupPDMock(mocks.mockPDClient.EXPECT()) - - defer mocks.mockCtrl.Finish() - - rss := &ReconcileSyncSet{ - client: mocks.fakeKubeClient, - scheme: scheme.Scheme, - pdclient: mocks.mockPDClient, - } - - // Act - _, err := rss.Reconcile(reconcile.Request{ - NamespacedName: types.NamespacedName{ - Name: testClusterName + config.SyncSetPostfix, - Namespace: testNamespace, - }, - }) - - // Assert - assert.NoError(t, err, "Unexpected Error") - assert.True(t, test.verifySyncSets(mocks.fakeKubeClient, test.expectedSyncSets)) - }) - } -} - -func verifySyncSetExists(c client.Client, expected *SyncSetEntry) bool { - ss := hivev1.SyncSet{} - err := c.Get(context.TODO(), - types.NamespacedName{Name: expected.name, Namespace: testNamespace}, - &ss) - if err != nil { - return false - } - - if expected.name != ss.Name { - return false - } - - if expected.clusterDeploymentRefName != ss.Spec.ClusterDeploymentRefs[0].Name { - return false - } - secretReferences := ss.Spec.SyncSetCommonSpec.Secrets[0].SourceRef.Name - if secretReferences == "" { - return false - } - - return string(secretReferences) == testsecretReferencesName -} - -func verifyNoSyncSetExists(c client.Client, expected *SyncSetEntry) bool { - ss := hivev1.SyncSet{} - err := c.Get(context.TODO(), - types.NamespacedName{Name: expected.name, Namespace: testNamespace}, - &ss) - if kubeerrors.IsNotFound(err) { - return true - } - return false -} diff --git a/pkg/controller/syncset/syncset_deleted.go b/pkg/controller/syncset/syncset_deleted.go deleted file mode 100644 index f25094a4..00000000 --- a/pkg/controller/syncset/syncset_deleted.go +++ /dev/null @@ -1,118 +0,0 @@ -// Copyright 2019 RedHat -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. 
-
-package syncset
-
-import (
-	"context"
-
-	hivev1 "github.com/openshift/hive/pkg/apis/hive/v1"
-	"github.com/openshift/pagerduty-operator/config"
-	"github.com/openshift/pagerduty-operator/pkg/kube"
-	pd "github.com/openshift/pagerduty-operator/pkg/pagerduty"
-	"github.com/openshift/pagerduty-operator/pkg/utils"
-	corev1 "k8s.io/api/core/v1"
-	"k8s.io/apimachinery/pkg/api/errors"
-	"k8s.io/apimachinery/pkg/types"
-	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
-	"sigs.k8s.io/controller-runtime/pkg/reconcile"
-)
-
-func (r *ReconcileSyncSet) recreateSyncSet(request reconcile.Request) (reconcile.Result, error) {
-	r.reqLogger.Info("Syncset deleted, regenerating")
-
-	clusterdeployment := &hivev1.ClusterDeployment{}
-	cdName := request.Name[0 : len(request.Name)-8]
-	err := r.client.Get(context.TODO(), types.NamespacedName{Namespace: request.Namespace, Name: cdName}, clusterdeployment)
-	if err != nil {
-		// Error finding the cluster deployment, requeue
-		return reconcile.Result{}, err
-	}
-
-	pdData := &pd.Data{
-		ClusterID:  clusterdeployment.Spec.ClusterName,
-		BaseDomain: clusterdeployment.Spec.BaseDomain,
-	}
-	pdData.ParsePDConfig(r.client)
-	pdData.ParseClusterConfig(r.client, request.Namespace, cdName)
-
-	// To prevent scoping issues in the err check below.
-	var pdIntegrationKey string
-	recreateCM := false
-
-	pdIntegrationKey, err = r.pdclient.GetIntegrationKey(pdData)
-	if err != nil {
-		var createErr error
-		_, createErr = r.pdclient.CreateService(pdData)
-
-		if createErr != nil {
-			return reconcile.Result{}, createErr
-		}
-		pdIntegrationKey, err = r.pdclient.GetIntegrationKey(pdData)
-		if err != nil {
-			return reconcile.Result{}, err
-		}
-		recreateCM = true
-	}
-
-	//check if the secret is already there , if already there , do nothing
-	secret := &corev1.Secret{}
-	err = r.client.Get(context.TODO(), types.NamespacedName{Name: config.PagerDutySecretName, Namespace: request.Namespace}, secret)
-	if err != nil {
-		if errors.IsNotFound(err) {
-			secret = kube.GeneratePdSecret(request.Namespace, config.PagerDutySecretName, pdIntegrationKey)
-			//add SetControllerReference
-			if err = controllerutil.SetControllerReference(clusterdeployment, secret, r.scheme); err != nil {
-				r.reqLogger.Error(err, "Error setting controller reference on secret")
-				return reconcile.Result{}, err
-			}
-			if err = r.client.Create(context.TODO(), secret); err != nil {
-				return reconcile.Result{}, err
-			}
-		}
-	}
-
-	newSS := &hivev1.SyncSet{}
-	err = r.client.Get(context.TODO(), types.NamespacedName{Name: request.Name + config.SyncSetPostfix, Namespace: request.Namespace}, newSS)
-	if err != nil {
-		if errors.IsNotFound(err) {
-			newSS = kube.GenerateSyncSet(request.Namespace, clusterdeployment.Name, secret)
-			if err := r.client.Create(context.TODO(), newSS); err != nil {
-				return reconcile.Result{}, err
-			}
-		}
-	}
-
-	if recreateCM {
-		cmName := cdName + config.ConfigMapPostfix
-		err = utils.DeleteConfigMap(cmName, request.Namespace, r.client, r.reqLogger)
-		if err != nil {
-			// couldn't find the config map, requeue
-			return reconcile.Result{}, err
-		}
-
-		newCM := kube.GenerateConfigMap(request.Namespace, cdName, pdData.ServiceID, pdData.IntegrationID)
-		if err := r.client.Create(context.TODO(), newCM); err != nil {
-			if errors.IsAlreadyExists(err) {
-				if updateErr := r.client.Update(context.TODO(), newCM); updateErr != nil {
-					return reconcile.Result{}, err
-				}
-				return reconcile.Result{}, nil
-			}
-			return reconcile.Result{}, err
-		}
-	}
-
-	return reconcile.Result{}, nil
-}
diff --git a/pkg/kube/configmap.go b/pkg/kube/configmap.go
index 838610de..afdda983 100644
--- a/pkg/kube/configmap.go
+++ b/pkg/kube/configmap.go
@@ -15,15 +15,12 @@
 package kube
 
 import (
-	"github.com/openshift/pagerduty-operator/config"
 	corev1 "k8s.io/api/core/v1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 )
 
 // GenerateConfigMap returns a configmap that can be created with the oc client
-func GenerateConfigMap(namespace string, name string, pdServiceID string, pdIntegrationID string) *corev1.ConfigMap {
-	cmName := name + config.ConfigMapPostfix
-
+func GenerateConfigMap(namespace string, cmName string, pdServiceID string, pdIntegrationID string) *corev1.ConfigMap {
 	return &corev1.ConfigMap{
 		ObjectMeta: metav1.ObjectMeta{
 			Name:      cmName,
diff --git a/pkg/kube/syncset.go b/pkg/kube/syncset.go
index e762f748..0ef7490b 100644
--- a/pkg/kube/syncset.go
+++ b/pkg/kube/syncset.go
@@ -16,24 +16,21 @@
 package kube
 
 import (
 	hivev1 "github.com/openshift/hive/pkg/apis/hive/v1"
-	"github.com/openshift/pagerduty-operator/config"
 	corev1 "k8s.io/api/core/v1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 )
 
 // GenerateSyncSet returns a syncset that can be created with the oc client
-func GenerateSyncSet(namespace string, name string, secret *corev1.Secret) *hivev1.SyncSet {
-	ssName := name + config.SyncSetPostfix
-
+func GenerateSyncSet(namespace string, clusterDeploymentName string, secret *corev1.Secret) *hivev1.SyncSet {
 	return &hivev1.SyncSet{
 		ObjectMeta: metav1.ObjectMeta{
-			Name:      ssName,
+			Name:      secret.Name,
 			Namespace: namespace,
 		},
 		Spec: hivev1.SyncSetSpec{
 			ClusterDeploymentRefs: []corev1.LocalObjectReference{
 				{
-					Name: name,
+					Name: clusterDeploymentName,
 				},
 			},
 			SyncSetCommonSpec: hivev1.SyncSetCommonSpec{
@@ -46,7 +43,7 @@
 				},
 				TargetRef: hivev1.SecretReference{
 					Namespace: "openshift-monitoring",
-					Name:      config.PagerDutySecretName,
+					Name:      secret.Name,
 				},
 			},
 		},
diff --git a/pkg/pagerduty/service.go b/pkg/pagerduty/service.go
index 37e2ee34..81170c0e 100644
--- a/pkg/pagerduty/service.go
+++ b/pkg/pagerduty/service.go
@@ -44,7 +44,7 @@ func getConfigMapKey(data map[string]string, key string) (string, error) {
 	return retString, nil
 }
 
-func getSecretKey(data map[string][]byte, key string) (string, error) {
+func GetSecretKey(data map[string][]byte, key string) (string, error) {
 	if _, ok := data[key]; !ok {
 		errorStr := fmt.Sprintf("%v does not exist", key)
 		return "", errors.New(errorStr)
@@ -112,10 +112,10 @@ func NewClient(APIKey string) Client {
 
 // Data describes the data that is needed for PagerDuty api calls
 type Data struct {
-	escalationPolicyID string
-	autoResolveTimeout uint
-	acknowledgeTimeOut uint
-	servicePrefix      string
+	EscalationPolicyID string
+	AutoResolveTimeout uint
+	AcknowledgeTimeOut uint
+	ServicePrefix      string
 	APIKey             string
 	ClusterID          string
 	BaseDomain         string
@@ -124,55 +124,10 @@ type Data struct {
 	IntegrationID string
 }
 
-// ParsePDConfig parses the PD secret and stores it in the struct
-func (data *Data) ParsePDConfig(osc client.Client) error {
-
-	pdAPISecret := &corev1.Secret{}
-	err := osc.Get(context.TODO(), types.NamespacedName{Namespace: config.OperatorNamespace, Name: config.PagerDutyAPISecretName}, pdAPISecret)
-	if err != nil {
-		return err
-	}
-
-	data.APIKey, err = getSecretKey(pdAPISecret.Data, config.PagerDutyAPISecretKey)
-	if err != nil {
-		return err
-	}
-
-	data.escalationPolicyID, err = getSecretKey(pdAPISecret.Data, "ESCALATION_POLICY")
-	if err != nil {
-		return err
-	}
-
-	autoResolveTimeoutStr, err := getSecretKey(pdAPISecret.Data, "RESOLVE_TIMEOUT")
-	if err != nil {
-		return err
-	}
-	data.autoResolveTimeout, err = convertStrToUint(autoResolveTimeoutStr)
-	if err != nil {
-		return err
-	}
-
-	acknowledgeTimeStr, err := getSecretKey(pdAPISecret.Data, "ACKNOWLEDGE_TIMEOUT")
-	if err != nil {
-		return err
-	}
-	data.acknowledgeTimeOut, err = convertStrToUint(acknowledgeTimeStr)
-	if err != nil {
-		return err
-	}
-
-	data.servicePrefix, err = getSecretKey(pdAPISecret.Data, "SERVICE_PREFIX")
-	if err != nil {
-		data.servicePrefix = "osd"
-	}
-
-	return nil
-}
-
 // ParseClusterConfig parses the cluster specific config map and stores the IDs in the data struct
-func (data *Data) ParseClusterConfig(osc client.Client, namespace string, name string) error {
+func (data *Data) ParseClusterConfig(osc client.Client, namespace string, cmName string) error {
 	pdAPIConfigMap := &corev1.ConfigMap{}
-	err := osc.Get(context.TODO(), types.NamespacedName{Namespace: namespace, Name: name + config.ConfigMapPostfix}, pdAPIConfigMap)
+	err := osc.Get(context.TODO(), types.NamespacedName{Namespace: namespace, Name: cmName}, pdAPIConfigMap)
 	if err != nil {
 		return err
 	}
@@ -213,17 +168,17 @@ func (c *SvcClient) GetIntegrationKey(data *Data) (string, error) {
 
 // CreateService creates a service in pagerduty for the specified clusterid and returns the service key
 func (c *SvcClient) CreateService(data *Data) (string, error) {
-	escalationPolicy, err := c.PdClient.GetEscalationPolicy(string(data.escalationPolicyID), nil)
+	escalationPolicy, err := c.PdClient.GetEscalationPolicy(string(data.EscalationPolicyID), nil)
 	if err != nil {
 		return "", errors.New("Escalation policy not found in PagerDuty")
 	}
 
 	clusterService := pdApi.Service{
-		Name:                   data.servicePrefix + "-" + data.ClusterID + "." + data.BaseDomain + "-hive-cluster",
+		Name:                   data.ServicePrefix + "-" + data.ClusterID + "." + data.BaseDomain + "-hive-cluster",
 		Description:            data.ClusterID + " - A managed hive created cluster",
 		EscalationPolicy:       *escalationPolicy,
-		AutoResolveTimeout:     &data.autoResolveTimeout,
-		AcknowledgementTimeout: &data.acknowledgeTimeOut,
+		AutoResolveTimeout:     &data.AutoResolveTimeout,
+		AcknowledgementTimeout: &data.AcknowledgeTimeOut,
 		AlertCreation:          "create_alerts_and_incidents",
 		IncidentUrgencyRule: &pdApi.IncidentUrgencyRule{
 			Type: "constant",
diff --git a/pkg/utils/utils.go b/pkg/utils/utils.go
index f2289bf3..a4f49e91 100644
--- a/pkg/utils/utils.go
+++ b/pkg/utils/utils.go
@@ -2,18 +2,15 @@ package utils
 
 import (
 	"context"
-	"strings"
 
 	"github.com/go-logr/logr"
 	hivev1 "github.com/openshift/hive/pkg/apis/hive/v1"
-	"github.com/openshift/pagerduty-operator/config"
 	v1 "k8s.io/api/core/v1"
 	"k8s.io/apimachinery/pkg/api/errors"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/types"
 	"k8s.io/apimachinery/pkg/util/sets"
 	"sigs.k8s.io/controller-runtime/pkg/client"
-	"sigs.k8s.io/controller-runtime/pkg/reconcile"
 )
 
 // HasFinalizer returns true if the given object has the given finalizer
@@ -40,59 +37,6 @@ func DeleteFinalizer(object metav1.Object, finalizer string) {
 	object.SetFinalizers(finalizers.List())
 }
 
-// CheckClusterDeployment returns true if the ClusterDeployment is watched by this operator
-func CheckClusterDeployment(request reconcile.Request, client client.Client, reqLogger logr.Logger) (bool, *hivev1.ClusterDeployment, error) {
-
-	// remove SyncSetPostfix from name to lookup the ClusterDeployment
-	cdName := strings.Replace(request.NamespacedName.Name, config.SyncSetPostfix, "", 1)
-	cdNamespace := request.NamespacedName.Namespace
-
-	clusterDeployment := &hivev1.ClusterDeployment{}
-	err := client.Get(context.TODO(), types.NamespacedName{Name: cdName, Namespace: cdNamespace}, clusterDeployment)
-
-	if err != nil {
-		if errors.IsNotFound(err) {
-			// Request object not found, could have been deleted after reconcile request.
-			// Owned objects are automatically garbage collected. For additional cleanup logic use finalizers.
-			// Return and don't requeue
-			reqLogger.Info("No matching cluster deployment found, ignoring")
-			return false, clusterDeployment, nil
-		}
-		// Error finding the cluster deployment, requeue
-		return false, clusterDeployment, err
-	}
-
-	if clusterDeployment.DeletionTimestamp != nil {
-		return false, clusterDeployment, nil
-	}
-
-	if !clusterDeployment.Spec.Installed {
-		return false, clusterDeployment, nil
-	}
-
-	if val, ok := clusterDeployment.GetLabels()[config.ClusterDeploymentManagedLabel]; ok {
-		if val != "true" {
-			reqLogger.Info("Is not a managed cluster")
-			return false, clusterDeployment, nil
-		}
-	} else {
-		// Managed tag is not present which implies it is not a managed cluster
-		reqLogger.Info("Is not a managed cluster")
-		return false, clusterDeployment, nil
-	}
-
-	// Return if alerts are disabled on the cluster
-	if val, ok := clusterDeployment.GetLabels()[config.ClusterDeploymentNoalertsLabel]; ok {
-		if val == "true" {
-			reqLogger.Info("Managed cluster with Alerts disabled", "Namespace", request.Namespace, "Name", request.Name)
-			return false, clusterDeployment, nil
-		}
-	}
-
-	// made it this far so it's both managed and has alerts enabled
-	return true, clusterDeployment, nil
-}
-
 // DeleteConfigMap deletes a ConfigMap
 func DeleteConfigMap(name string, namespace string, client client.Client, reqLogger logr.Logger) error {
 	configmap := &v1.ConfigMap{}