[Proposal] API Design for EtcdCluster resource #62
I'd prefer this to move to some additional key, so as not to overwhelm the base spec:

```diff
+  storage:
+    volumeClaimTemplate:
+      metadata:
+        labels:
+          env: prod
+        annotations:
+          example.com/annotation: "true"
+      spec: # core.v1.PersistentVolumeClaimSpec Ready k8s type
+        storageClassName: gp3
+        accessModes: [ "ReadWriteOnce" ]
+        resources:
+          requests:
+            storage: 10Gi
+    emptyDir: {} # core.v1.EmptyDirVolumeSource Ready k8s type
```

We need to discuss whether we want to enable all possible volume types, or restrict them to some safe subset, because in fact …

```diff
+  podDisruptionBudget:
+    maxUnavailable: 1 # intstr.IntOrString
+    minAvailable: 2
+    selectorLabels: # If not set, the operator will use the labels from the EtcdCluster
+      env: prod
```

We need a binary flag: deploy the PDB or not, not just the values in the spec, I think. Or …

```diff
+  serviceSpec:
+    metadata:
+      labels:
+        env: prod
+      annotations:
+        example.com/annotation: "true"
+    spec: # core.v1.ServiceSpec Ready k8s type
```

This is not enough, as the business logic supposes several types of services: the basic one is headless, and probably an additional one for clients? Should we give the user the option to choose whether they want the second one, or just deploy both every time?

```diff
+  extraArgs: # map[string]string
+    arg1: "value1"
+    arg2: "value2"
+  extraEnvs: # []core.v1.EnvVar Ready k8s type
+    - name: MY_ENV
+      value: "my-value"
```

Should we have some additional validation here? Let's say that we pass …
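On the "binary flag" question: one option (a sketch, not the agreed design; field names are my assumptions) is to make the sub-spec a pointer in the Go types, so that omitting the key means "do not deploy a PDB" and no separate boolean is needed:

```go
package v1alpha1

import "k8s.io/apimachinery/pkg/util/intstr"

// Sketch: an optional sub-spec as a pointer, so presence in the manifest
// doubles as the enable/disable flag.
type EtcdClusterSpec struct {
	Image    string `json:"image,omitempty"`
	Replicas int32  `json:"replicas,omitempty"`
	// nil means "do not create a PodDisruptionBudget"; a non-nil (even
	// empty) value means "create one with these settings or defaults".
	PodDisruptionBudget *PDBSpec `json:"podDisruptionBudget,omitempty"`
}

type PDBSpec struct {
	MaxUnavailable *intstr.IntOrString `json:"maxUnavailable,omitempty"`
	MinAvailable   *intstr.IntOrString `json:"minAvailable,omitempty"`
	// If not set, the operator will use the labels from the EtcdCluster.
	SelectorLabels map[string]string `json:"selectorLabels,omitempty"`
}
```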
As for the PDB spec, I think a user of an etcd cluster does not need such a complex spec; the operator can configure …
Then you also need to disable or enable the creation of a custom service account...
Also, I want to have a good hierarchy:

```diff
 ---
-apiVersion: etcd.aenix.io/v1alpha1
+apiVersion: etcd.aenix.io/v1alpha2
 kind: EtcdCluster
 metadata:
   name: test
   namespace: default
 spec:
   image: "quay.io/coreos/etcd:v3.5.12"
   replicas: 3
+  podSpec:
+    imagePullSecrets: # core.v1.LocalObjectReference Ready k8s type
+      - name: myregistrykey
+    serviceAccountName: default
+    podMetadata:
+      labels:
+        env: prod
+      annotations:
+        example.com/annotation: "true"
+    resources: # core.v1.ResourceRequirements Ready k8s type
+      requests:
+        cpu: 100m
+        memory: 100Mi
+      limits:
+        cpu: 200m
+        memory: 200Mi
+    affinity: {} # core.v1.Affinity Ready k8s type
+    nodeSelector: {} # map[string]string
+    tolerations: [] # core.v1.Toleration Ready k8s type
+    securityContext: {} # core.v1.PodSecurityContext Ready k8s type
+    priorityClassName: "low"
+    topologySpreadConstraints: [] # core.v1.TopologySpreadConstraint Ready k8s type
+    terminationGracePeriodSeconds: 30 # int64
+    schedulerName: "default-scheduler"
+    runtimeClassName: "legacy"
+  service-headless:
+    serviceSpec:
+      metadata:
+        labels:
+          env: prod
+        annotations:
+          example.com/annotation: "true"
+      spec: # core.v1.ServiceSpec Ready k8s type
+  service-main:
+    serviceSpec:
+      metadata:
+        labels:
+          env: prod
+        annotations:
+          example.com/annotation: "true"
+      spec: # core.v1.ServiceSpec Ready k8s type
+  storage:
+    volumeClaimTemplate:
+      metadata:
+        labels:
+          env: prod
+        annotations:
+          example.com/annotation: "true"
+      spec: # core.v1.PersistentVolumeClaimSpec Ready k8s type
+        storageClassName: gp3
+        accessModes: [ "ReadWriteOnce" ]
+        resources:
+          requests:
+            storage: 10Gi
+    emptyDir: {} # core.v1.EmptyDirVolumeSource Ready k8s type
```

Something like this. Then in the editor I will be able to easily collapse and expand particular blocks with the technical details. I want to emphasize that I think it is a really great idea to separate the main business-logic parameters (like the number of replicas) from the deep technical things like service labels.
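A sketch of how that hierarchy could look in the operator's Go types (type and field names are my assumptions, reusing the stock core/v1 types the YAML comments point at):

```go
package v1alpha2

import corev1 "k8s.io/api/core/v1"

// Hypothetical sketch of the proposed hierarchy: business-level fields
// (image, replicas) stay flat, technical knobs are grouped under podSpec.
type EtcdClusterSpec struct {
	Image    string `json:"image,omitempty"`
	Replicas int32  `json:"replicas,omitempty"`

	// All scheduling and runtime details live under one collapsible key.
	PodSpec PodSpec `json:"podSpec,omitempty"`
}

type PodSpec struct {
	ImagePullSecrets              []corev1.LocalObjectReference     `json:"imagePullSecrets,omitempty"`
	ServiceAccountName            string                            `json:"serviceAccountName,omitempty"`
	Resources                     corev1.ResourceRequirements       `json:"resources,omitempty"`
	Affinity                      *corev1.Affinity                  `json:"affinity,omitempty"`
	NodeSelector                  map[string]string                 `json:"nodeSelector,omitempty"`
	Tolerations                   []corev1.Toleration               `json:"tolerations,omitempty"`
	SecurityContext               *corev1.PodSecurityContext        `json:"securityContext,omitempty"`
	PriorityClassName             string                            `json:"priorityClassName,omitempty"`
	TopologySpreadConstraints     []corev1.TopologySpreadConstraint `json:"topologySpreadConstraints,omitempty"`
	TerminationGracePeriodSeconds *int64                            `json:"terminationGracePeriodSeconds,omitempty"`
	SchedulerName                 string                            `json:"schedulerName,omitempty"`
	RuntimeClassName              *string                           `json:"runtimeClassName,omitempty"`
}
```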
Updated spec addressing all comments:

```diff
 ---
 apiVersion: etcd.aenix.io/v1alpha1
 kind: EtcdCluster
 metadata:
   name: test
   namespace: default
 spec:
   image: "quay.io/coreos/etcd:v3.5.12"
   replicas: 3
+  podSpec:
+    imagePullPolicy: "Always" # core.v1.PullPolicy Ready k8s type
+    imagePullSecrets: # core.v1.LocalObjectReference Ready k8s type
+      - name: myregistrykey
+    podMetadata:
+      labels:
+        env: prod
+      annotations:
+        example.com/annotation: "true"
+    resources: # core.v1.ResourceRequirements Ready k8s type
+      requests:
+        cpu: 100m
+        memory: 100Mi
+      limits:
+        cpu: 200m
+        memory: 200Mi
+    affinity: {} # core.v1.Affinity Ready k8s type
+    nodeSelector: {} # map[string]string
+    tolerations: [] # core.v1.Toleration Ready k8s type
+    securityContext: {} # core.v1.PodSecurityContext Ready k8s type
+    priorityClassName: "low"
+    topologySpreadConstraints: [] # core.v1.TopologySpreadConstraint Ready k8s type
+    terminationGracePeriodSeconds: 30 # int64
+    schedulerName: "default-scheduler"
+    runtimeClassName: "legacy"
+  extraArgs: # map[string]string
+    arg1: "value1"
+    arg2: "value2"
+  extraEnvs: # []core.v1.EnvVar Ready k8s type
+    - name: MY_ENV
+      value: "my-value"
+  serviceAccountSpec: # TBD. How to represent it? Do we need the ability to specify an existing service account?
+    create: true
+    metadata:
+      labels:
+        env: prod
+      annotations:
+        example.com/annotation: "true"
+  serviceSpec:
+    client:
+      metadata:
+        labels:
+          env: prod
+        annotations:
+          example.com/annotation: "true"
+      spec: # core.v1.ServiceSpec Ready k8s type
+    headless:
+      metadata:
+        labels:
+          env: prod
+        annotations:
+          example.com/annotation: "true"
+      spec: # core.v1.ServiceSpec Ready k8s type
+  podDisruptionBudget:
+    maxUnavailable: 1 # intstr.IntOrString
+    minAvailable: 2
+    selectorLabels: # If not set, the operator will use the labels from the EtcdCluster
+      env: prod
+  readinessGates: [] # core.v1.PodReadinessGate Ready k8s type
+  storage: # Discussed separately
+    volumeClaimTemplate:
+      metadata:
+        labels:
+          env: prod
+        annotations:
+          example.com/annotation: "true"
+      spec: # core.v1.PersistentVolumeClaimSpec Ready k8s type
+        storageClassName: gp3
+        accessModes: [ "ReadWriteOnce" ]
+        resources:
+          requests:
+            storage: 10Gi
+    emptyDir: {} # core.v1.EmptyDirVolumeSource Ready k8s type
-  storage:
-    persistence: true # default: true, immutable
-    storageClass: local-path
-    size: 10Gi
 status:
   conditions:
     - lastProbeTime: null
       lastTransitionTime: "2024-03-06T18:39:45Z"
       status: "True"
       type: Ready
```
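For the storage part (discussed separately, see #67), a hedged sketch of the types the YAML above implies; the type names are my assumptions, and the mutual exclusivity of volumeClaimTemplate and emptyDir is my reading, which would need webhook or CEL enforcement:

```go
package v1alpha1

import corev1 "k8s.io/api/core/v1"

// Sketch: exactly one of the two volume sources should be set; the operator
// would have to reject manifests that set both.
type StorageSpec struct {
	// Template for a PersistentVolumeClaim created per pod.
	VolumeClaimTemplate *EmbeddedPVC `json:"volumeClaimTemplate,omitempty"`
	// Ephemeral storage; data is lost when the pod is deleted.
	EmptyDir *corev1.EmptyDirVolumeSource `json:"emptyDir,omitempty"`
}

// EmbeddedPVC (hypothetical name) carries only the metadata that makes sense
// on a template, plus the stock PVC spec.
type EmbeddedPVC struct {
	Metadata EmbeddedMetadata                 `json:"metadata,omitempty"`
	Spec     corev1.PersistentVolumeClaimSpec `json:"spec,omitempty"`
}

type EmbeddedMetadata struct {
	Labels      map[string]string `json:"labels,omitempty"`
	Annotations map[string]string `json:"annotations,omitempty"`
}
```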
I don't think it's necessary. I guess the user can make anything they want here.
I guess it's better to have the ability to configure all fields in the PDB. For example, my cluster has a policy that checks PDBs and requires a percentage value in minAvailable.
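For reference, both PDB fields are intstr.IntOrString, which accepts percentages as well as integers, so a percent-only policy can be satisfied by the spec shape above; a minimal illustration:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// minAvailable as a percentage, as such cluster policies require.
	minAvailable := intstr.FromString("50%")
	// maxUnavailable as a plain integer also remains possible.
	maxUnavailable := intstr.FromInt(1)
	fmt.Println(minAvailable.String(), maxUnavailable.String())
}
```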
#67 implements the storage spec from this proposal. All discussions about the storage part can now be addressed there.
No way. Anything that does not break the etcd cluster.
Yes, I agree; I never argued with that. I am pointing at a different thing.
I think we need …
@AlexGluck I don't think so, because it is a much more important thing than affinity, tolerations, etc. But let it be.
Updated spec after the merge of #69. The other spec fields are implemented and removed from this diff:

```diff
 ---
 apiVersion: etcd.aenix.io/v1alpha1
 kind: EtcdCluster
 metadata:
   name: test
   namespace: default
 spec:
   image: "quay.io/coreos/etcd:v3.5.12"
   replicas: 3
+  serviceAccountSpec: # TBD. How to represent it? Do we need the ability to specify an existing service account?
+    create: true
+    metadata:
+      labels:
+        env: prod
+      annotations:
+        example.com/annotation: "true"
+  serviceSpec:
+    client:
+      metadata:
+        labels:
+          env: prod
+        annotations:
+          example.com/annotation: "true"
+      spec: # core.v1.ServiceSpec Ready k8s type
+    headless:
+      metadata:
+        labels:
+          env: prod
+        annotations:
+          example.com/annotation: "true"
+      spec: # core.v1.ServiceSpec Ready k8s type
+  podDisruptionBudget:
+    maxUnavailable: 1 # intstr.IntOrString
+    minAvailable: 2
+    selectorLabels: # If not set, the operator will use the labels from the EtcdCluster
+      env: prod
+  readinessGates: [] # core.v1.PodReadinessGate Ready k8s type
```
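On the serviceAccountSpec TBD, one possible shape (a sketch only; the name field and its semantics are my assumptions, not a settled design):

```go
package v1alpha1

// Hypothetical sketch: "create" toggles whether the operator manages the
// ServiceAccount, while "name" lets the user point at a pre-existing one.
type ServiceAccountSpec struct {
	Create bool `json:"create,omitempty"`
	// Name of an existing ServiceAccount to use when Create is false.
	// When Create is true and Name is empty, the operator picks a default.
	Name        string            `json:"name,omitempty"`
	Labels      map[string]string `json:"labels,omitempty"`
	Annotations map[string]string `json:"annotations,omitempty"`
}
```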
I was taking a look at the specification, and noticed this (etcd-operator/api/v1alpha1/etcdcluster_types.go, lines 28 to 32 at 755f630).

In the context of etcd, an even number of instances is nonsense; I'm wondering if we could take advantage of the +kubebuilder:validation:MultipleOf kubebuilder validation marker: "specifies that this field must have a numeric value that's a multiple of this one."
Hello! Thanks for the comment. I think that we don't want to limit the user. Also, we could use this field during cluster auto-scaling: for instance, we had 3 instances and we want to scale to 5; it is logical for the operator to change this field and put it in the CR, right? Another thing: if you have 3 AZs, 5 or 7 instances won't be a good choice for you; maybe 6?
Damn, you're right 😁
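A side note on the marker itself: MultipleOf can only force multiples (for example, even numbers), never odd ones. If odd-only replica counts were ever wanted, a CEL rule could express it; a sketch assuming a controller-gen version with XValidation support:

```go
package v1alpha1

// Sketch only: the project decided not to restrict replicas, but if odd-only
// counts were desired, kubebuilder's CEL marker could express what MultipleOf
// cannot.
type ExampleSpec struct {
	// +kubebuilder:validation:Minimum=1
	// +kubebuilder:validation:XValidation:rule="self % 2 == 1",message="replicas must be odd"
	Replicas int32 `json:"replicas,omitempty"`
}
```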
Okay, I'm going to close this proposal in favor of #109.
Here is a design proposal for the EtcdCluster resource that will cover more real-world use cases. Requesting comments.
Inspired by https://docs.victoriametrics.com/operator/api/#vmstorage
Covers the scope of #61.