Conversation
webvictim
left a comment
Can we have an extra example file showing how to configure ACM?
annotations:
service:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:1234567890:certificate/12345678-43c7-4dd1-a2f6-c495b91ebece"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl

Maybe with a commented-out service.beta.kubernetes.io/aws-load-balancer-scheme: "internal" for an extra example of how to deploy an internal LB?
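Putting the pieces of this comment together, such an example file might look like the sketch below (the certificate ARN is the placeholder from the comment, and the commented-out scheme annotation is the one the reviewer suggested — the chart's real example may differ):

```yaml
annotations:
  service:
    # ARN of the ACM certificate to terminate TLS at the load balancer (placeholder value)
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:1234567890:certificate/12345678-43c7-4dd1-a2f6-c495b91ebece"
    # Only port 443 serves TLS to clients
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    # The backend (Teleport proxy) still speaks TLS itself
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
    # Uncomment to deploy an internal (non internet-facing) load balancer instead
    # service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
```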
auth_service:
  authentication:
    type: local
    second_factor: on
    webauthn:
      rp_id: teleport.example.com
I guess as part of the revamp we're no longer going to just let people set authentication.secondFactor: on in the values and write this config for them automatically? I can see this being jarring for people who have an existing setup and are upgrading.
Well, the old version is still working. I just tried to emphasize the config passthrough whenever possible.
Got it, I wasn't 100% sure. Great news!
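For illustration, the two styles being contrasted above might look roughly like this (a sketch based on the discussion; the exact value keys, in particular the `teleportConfig` passthrough name, are assumptions rather than confirmed chart API):

```yaml
# Old style: set a high-level value and let the chart
# generate the auth_service config automatically
authentication:
  secondFactor: "on"

# New style: pass the Teleport configuration through verbatim
auth:
  teleportConfig:
    auth_service:
      authentication:
        second_factor: "on"
```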
@@ -0,0 +1,36 @@
# This example shows how to configure Teleport auth pods to authenticate to AWS by assuming an IAM role
# instead of inheriting ambiant credentials from the EC2 node or relying on a service account key.
- # instead of inheriting ambiant credentials from the EC2 node or relying on a service account key.
+ # instead of inheriting ambient credentials from the EC2 node or relying on a service account key.
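On EKS, assuming an IAM role from a pod is commonly done via IAM Roles for Service Accounts (IRSA), where the service account carries a role annotation resolved through the cluster's OIDC provider. A minimal sketch of the relevant values (the role ARN is a placeholder, and the exact values key the chart exposes for service account annotations is an assumption):

```yaml
serviceAccount:
  annotations:
    # EKS injects credentials for this role into pods using the
    # service account (IRSA). Placeholder ARN for illustration.
    eks.amazonaws.com/role-arn: "arn:aws:iam::123456789012:role/teleport-auth"
```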
# EKS in-tree LoadBalancer usage is discouraged by AWS but remains the easiest and most used
# way to create a LoadBalancer on an EKS cluster as it is working by default and does not
# require the cluster administrator to install and maintain additional components.
This is still mapped to an ELB (i.e. classic LB) by default when running in Kubernetes, right?
I kinda think we should just remove all examples of doing this as ELBs are so old...
The chart will put the annotation asking for an NLB when running in chartMode: aws.
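The annotation in question is presumably the in-tree controller's NLB selector (a sketch for context; the chart's actual template may set it differently):

```yaml
# Tells the in-tree cloud controller to provision a Network Load Balancer
# instead of the default Classic ELB
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
```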
auth:
  resources:
    requests:
      cpu: "3"
      memory: "6GiB"
    limits:
      cpu: "3"
      memory: "6GiB"
proxy:
  resources:
    requests:
      cpu: "1"
      memory: "4GiB"
    limits:
      cpu: "1"
      memory: "4GiB"
Suggested change:
- auth:
-   resources:
-     requests:
-       cpu: "3"
-       memory: "6GiB"
-     limits:
-       cpu: "3"
-       memory: "6GiB"
- proxy:
-   resources:
-     requests:
-       cpu: "1"
-       memory: "4GiB"
-     limits:
-       cpu: "1"
-       memory: "4GiB"
+ auth:
+   resources:
+     requests:
+       cpu: "25m"
+       memory: "25Mi"
+     limits:
+       cpu: "4"
+       memory: "4Gi"
+ proxy:
+   resources:
+     requests:
+       cpu: "25m"
+       memory: "25Mi"
+     limits:
+       cpu: "4"
+       memory: "4Gi"
These match the limits we set by default for Teleport Cloud, so might be a little more useful if people just copy/paste them blindly...
Mismatched memory requests and limits 😭
type: etcd
peers: [ "https://etcd-0.etcd-headless.teleport.svc.cluster.local:2379", "https://etcd-1.etcd-headless.teleport.svc.cluster.local:2379", "https://etcd-2.etcd-headless.teleport.svc.cluster.local:2379" ]
prefix: /teleport/
Might be a bit more readable like this? I don't know which I prefer 🤷♂️
Suggested change:
  type: etcd
- peers: [ "https://etcd-0.etcd-headless.teleport.svc.cluster.local:2379", "https://etcd-1.etcd-headless.teleport.svc.cluster.local:2379", "https://etcd-2.etcd-headless.teleport.svc.cluster.local:2379" ]
+ peers:
+   - "https://etcd-0.etcd-headless.teleport.svc.cluster.local:2379"
+   - "https://etcd-1.etcd-headless.teleport.svc.cluster.local:2379"
+   - "https://etcd-2.etcd-headless.teleport.svc.cluster.local:2379"
  prefix: /teleport/
marcoandredinis
left a comment
Examples look good
Did we try all of them?
Not yet, I'm waiting for the main PR merge and SEs' feedback to run all those examples. For now, the config is just copied from GitHub discussions.
# AWS LoadBalancer Controller is the AWS recommended way to create AWS LoadBalancer resources
# sending traffic to an EKS cluster. However, this requires cluster administrators to install
# and manage the controller.
# See
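For context, a Service handed off to the AWS Load Balancer Controller typically carries annotations along these lines (a sketch based on the controller's documented annotations, not taken from this PR's example file):

```yaml
annotations:
  service:
    # Delegate this Service to the AWS Load Balancer Controller
    # instead of the in-tree cloud controller
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    # Send traffic directly to pod IPs rather than through NodePorts
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    # Use "internal" instead to keep the load balancer off the public internet
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
```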
Part of RFD-0096
This PR adds values file examples for various setups. The goal is to show users how the chart can be used.