From 42c938e0a20fd06f550fbf48dc38e75166922e6c Mon Sep 17 00:00:00 2001 From: Harish Udaiya Kumar Date: Thu, 18 Apr 2019 13:31:04 -0700 Subject: [PATCH 1/7] Initial draft of documentation for Cluster creation using cross account role assumption --- docs/roleassumption.md | 107 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 107 insertions(+) create mode 100644 docs/roleassumption.md diff --git a/docs/roleassumption.md b/docs/roleassumption.md new file mode 100644 index 0000000000..eeabfc08f5 --- /dev/null +++ b/docs/roleassumption.md @@ -0,0 +1,107 @@ +**Creating clusters using cross account role assumption using kiam** + +This document outlines the list of steps to create the target cluster via cross account role assumption using KIAM. +KIAM lets the controller pod(s) to assume an AWS role that enables them create AWS resources necessary to create an +operational cluster. This way we wouldn't have to mount any AWS credentials or load environment variables to +supply AWS credentials to the CAPA controller. This is automatically taken care by the KIAM components. +Note: If you dont want to use KIAM and rather want to mount the credentials as secrets, you may still achieve cross +account role assumption by using multiple profiles. (TODO add this section at the bottom) + +**Glossory** + +* Trusting Account - The account where the cluster is created +* Trusted Account - The AWS account where the CAPA controller runs. The "Trusting" account trusts the "Trusted" account +to create a new cluster in "Trusting" account. + +**Assumptions:** +1. The CAPA controllers are running in 1 AWS account and you want to create the target cluster in another AWS account. +(We could also use this doc to create a cluster in the same account) +2. This assumes that you start with no existing clusters. + +**High level steps** + +1. Creating a bootstrap/management cluster in AWS - This can be done by running the phases in clusterctl + * Uses the existing provider components yaml +2. Setting up cross account roles +3. Deploying the Kiam server/agent +4. Create the target cluster (through kiam) + * Uses different provider components with no secrets and annotation to indicate the IAM Role to assume. + +**Creating the bootstrap cluster in AWS** +Using clusterctl command we can create a new cluster in AWS which in turn will act as the +bootstrap cluster to create the target cluster(in a different AWS account. This can be achieved by using the phases in +clusterctl to perform all the steps except the pivoting. This will provide us with a bare-bones functioning cluster that +we can use as a bootstrap cluster. +To begin with follow the steps in this getting started guide +(https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/master/docs/getting-started.md) to setup the environment +except for creating the actual cluster. Instead follow the steps below to create the cluster. 
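Before running the steps below, make sure your shell is using credentials for the source AWS account, since that is where the management cluster and the CAPA controllers will live. As a minimal sketch (assuming the standard AWS environment variables are used and treating all values as placeholders; the getting started guide remains the authoritative reference):

```
# Credentials/region for the SOURCE account (the account that will host the
# management cluster and run the CAPA controllers).
export AWS_ACCESS_KEY_ID=<source-account-access-key-id>
export AWS_SECRET_ACCESS_KEY=<source-account-secret-access-key>
export AWS_REGION=us-west-2

# One-time IAM bootstrap (CloudFormation stack) for the source account, as in the
# getting started guide; the same command is run later against the target account.
clusterawsadm alpha bootstrap create-stack
```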
+ +create a new cluster using kind for bootstrapping purpose by running: +```$xslt +kind create cluster +``` +and get its kube config path by running +``` +export KIND_KUBECONFIG=`kind get kubeconfig-path` +``` + +Use the following commands to create new (bootstrap) cluster in AWS +``` +kubectl alpha phases apply-cluster-api-components --provider-components cmd/clusterctl/examples/aws/out/provider-components.yaml \ +--kubeconfig $KIND_KUBECONFIG + +kubectl alpha phases apply-cluster --cluster cmd/clusterctl/examples/aws/out/cluster.yaml --kubeconfig $KIND_KUBECONFIG + +kubectl alpha phases apply-machines --machines cmd/clusterctl/examples/aws/out/machines.yaml +--kubeconfig $KIND_KUBECONFIG + +kubectl alpha phases get-kubeconfig --provider aws --cluster-name --kubeconfig $KIND_KUBECONFIG +export AWS_KUBECONFIG=`pwd`/kubeconfig + +kubectl alpha phases apply-addons -a cmd/clusterctl/examples/aws/out/addons.yaml +--kubeconfig $AWS_KUBECONFIG + +``` + +Verify that all the pods in the kube-system namespace are running smoothly. Also you may remove the additional node in +the machines example yaml since we are only interested in running the controllers that runs in control plane node +(although its not required to make any changes there). You can destroy your local kind cluster by running +```$xslt +make kind-reset +``` + +**Setting up cross account roles:** + +In this step we will new roles/policy in total across 2 different AWS accounts. +First lets start by creating the roles in the account where the AWS controller runs. Lets call this the "trusted" +account since this account is trusted by the "trusting" account where the cluster is created. Following the directions +posted here:https://github.com/uswitch/kiam/blob/master/docs/IAM.md create a "kiam_server" role +in AWS that only has a single managed policy with a single permission "sts:AssumeRole". Also add a trust policy on the + "kiam_server" role to include the role attached to the Control plane instance as a trusted entity. This looks something + like this: + ```$xslt +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "AWS": "arn:aws:iam:::role/control-plane.cluster-api-provider-aws.sigs.k8s.io" + }, + "Action": "sts:AssumeRole" + } + ] +} +``` + +Next we must establish a link between this "kiam_server" and the role on trusting account that includes the permissions to +create new cluster. This is done by using the similar steps as shown above. +Sign in to the trusting account. +This requires running the clusterawsadm cli to create a new stack on the trusting account where the target cluster is +created + +```clusterawsadm alpha bootstrap create-stack``` + +The last role to be created is the one that is attached to the control plane that runs the CAPA controllers. +This role must have a minimal set of permissions + From d990e2e0f2061ff427611c5dacfc8e66c26cc518 Mon Sep 17 00:00:00 2001 From: harishspqr Date: Thu, 18 Apr 2019 14:28:58 -0700 Subject: [PATCH 2/7] Update roleassumption.md Complete the document. 
--- docs/roleassumption.md | 324 ++++++++++++++++++++++++++++++++++++++--- 1 file changed, 305 insertions(+), 19 deletions(-) diff --git a/docs/roleassumption.md b/docs/roleassumption.md index eeabfc08f5..270e764f67 100644 --- a/docs/roleassumption.md +++ b/docs/roleassumption.md @@ -1,4 +1,4 @@ -**Creating clusters using cross account role assumption using kiam** +# Creating clusters using cross account role assumption using kiam This document outlines the list of steps to create the target cluster via cross account role assumption using KIAM. KIAM lets the controller pod(s) to assume an AWS role that enables them create AWS resources necessary to create an @@ -7,18 +7,17 @@ supply AWS credentials to the CAPA controller. This is automatically taken care Note: If you dont want to use KIAM and rather want to mount the credentials as secrets, you may still achieve cross account role assumption by using multiple profiles. (TODO add this section at the bottom) -**Glossory** +### Glossory -* Trusting Account - The account where the cluster is created -* Trusted Account - The AWS account where the CAPA controller runs. The "Trusting" account trusts the "Trusted" account -to create a new cluster in "Trusting" account. +* Target Account - The account where the cluster is created +* Source Account - The AWS account where the CAPA controller runs. -**Assumptions:** +## Assumptions 1. The CAPA controllers are running in 1 AWS account and you want to create the target cluster in another AWS account. (We could also use this doc to create a cluster in the same account) 2. This assumes that you start with no existing clusters. -**High level steps** +## High level steps 1. Creating a bootstrap/management cluster in AWS - This can be done by running the phases in clusterctl * Uses the existing provider components yaml @@ -27,7 +26,7 @@ to create a new cluster in "Trusting" account. 4. Create the target cluster (through kiam) * Uses different provider components with no secrets and annotation to indicate the IAM Role to assume. -**Creating the bootstrap cluster in AWS** +## 1. Creating the bootstrap cluster in AWS Using clusterctl command we can create a new cluster in AWS which in turn will act as the bootstrap cluster to create the target cluster(in a different AWS account. This can be achieved by using the phases in clusterctl to perform all the steps except the pivoting. This will provide us with a bare-bones functioning cluster that @@ -70,11 +69,10 @@ the machines example yaml since we are only interested in running the controller make kind-reset ``` -**Setting up cross account roles:** +## 2. Setting up cross account roles In this step we will new roles/policy in total across 2 different AWS accounts. -First lets start by creating the roles in the account where the AWS controller runs. Lets call this the "trusted" -account since this account is trusted by the "trusting" account where the cluster is created. Following the directions +First lets start by creating the roles in the account where the AWS controller runs. Following the directions posted here:https://github.com/uswitch/kiam/blob/master/docs/IAM.md create a "kiam_server" role in AWS that only has a single managed policy with a single permission "sts:AssumeRole". Also add a trust policy on the "kiam_server" role to include the role attached to the Control plane instance as a trusted entity. 
This looks something @@ -94,14 +92,302 @@ in AWS that only has a single managed policy with a single permission "sts:Assum } ``` -Next we must establish a link between this "kiam_server" and the role on trusting account that includes the permissions to -create new cluster. This is done by using the similar steps as shown above. -Sign in to the trusting account. -This requires running the clusterawsadm cli to create a new stack on the trusting account where the target cluster is -created - +Next we must establish a link between this "kiam_server" role on source AWS account and the role on target AWS account that has the permissions to create new cluster. +Begin by running the clusterawsadm cli to create a new stack on the target account where the target cluster is created. Make sure you use the credentials for target AWS account for running this step. ```clusterawsadm alpha bootstrap create-stack``` +Then sign-in to the target AWS account to establish the link as mentioned above. Create a new Role with the permission policy set to "controllers.cluster-api-provider-aws.sigs.k8s.io". Lets name this role "cluster-api" for future reference. Add a new trust relationship to include the "kiam_server" role from the source account as trusted entity. This is shown below: +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "AWS": "arn:aws:iam:::role/kserver" + }, + "Action": "sts:AssumeRole" + } + ] +} +``` +## 3. Deploying the Kiam server & agent +By Now, your target cluster must be up & running. Make sure your KUBECONFIG pointing to the cluster in the target account. + +Create new secrets using the steps outlined here: https://github.com/uswitch/kiam/blob/master/docs/TLS.md +Apply the manifest shown below: +Make sure you update the argument to include your source AWS account "--assume-role-arn=arn:aws:iam:::role/kiam_server" +server.yaml +``` +--- +apiVersion: extensions/v1beta1 +kind: DaemonSet +metadata: + namespace: kube-system + name: kiam-server +spec: + template: + metadata: + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "9620" + labels: + app: kiam + role: server + spec: + serviceAccountName: kiam-server + nodeSelector: + node-role.kubernetes.io/master: "" + tolerations: + - operator: "Exists" + volumes: + - name: ssl-certs + hostPath: + # for AWS linux or RHEL distros + # path: /etc/pki/ca-trust/extracted/pem/ + path: /etc/ssl/certs/ + - name: tls + secret: + secretName: kiam-server-tls + hostNetwork: true + containers: + - name: kiam + image: quay.io/uswitch/kiam:master # USE A TAGGED RELEASE IN PRODUCTION + imagePullPolicy: Always + command: + - /kiam + args: + - server + - --json-log + - --level=warn + - --bind=0.0.0.0:443 + - --cert=/etc/kiam/tls/server.pem + - --key=/etc/kiam/tls/server-key.pem + - --ca=/etc/kiam/tls/ca.pem + - --role-base-arn-autodetect + - --assume-role-arn=arn:aws:iam:::role/kiam_server + - --sync=1m + - --prometheus-listen-addr=0.0.0.0:9620 + - --prometheus-sync-interval=5s + volumeMounts: + - mountPath: /etc/ssl/certs + name: ssl-certs + - mountPath: /etc/kiam/tls + name: tls + livenessProbe: + exec: + command: + - /kiam + - health + - --cert=/etc/kiam/tls/server.pem + - --key=/etc/kiam/tls/server-key.pem + - --ca=/etc/kiam/tls/ca.pem + - --server-address=127.0.0.1:443 + - --gateway-timeout-creation=1s + - --timeout=5s + initialDelaySeconds: 10 + periodSeconds: 10 + timeoutSeconds: 10 + readinessProbe: + exec: + command: + - /kiam + - health + - --cert=/etc/kiam/tls/server.pem + - 
--key=/etc/kiam/tls/server-key.pem + - --ca=/etc/kiam/tls/ca.pem + - --server-address=127.0.0.1:443 + - --gateway-timeout-creation=1s + - --timeout=5s + initialDelaySeconds: 3 + periodSeconds: 10 + timeoutSeconds: 10 +--- +apiVersion: v1 +kind: Service +metadata: + name: kiam-server + namespace: kube-system +spec: + clusterIP: None + selector: + app: kiam + role: server + ports: + - name: grpclb + port: 443 + targetPort: 443 + protocol: TCP +``` +agent.yaml +``` +apiVersion: extensions/v1beta1 +kind: DaemonSet +metadata: + namespace: kube-system + name: kiam-agent +spec: + template: + metadata: + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "9620" + labels: + app: kiam + role: agent + spec: + hostNetwork: true + dnsPolicy: ClusterFirstWithHostNet + tolerations: + - operator: "Exists" + volumes: + - name: ssl-certs + hostPath: + # for AWS linux or RHEL distros + #path: /etc/pki/ca-trust/extracted/pem/ + path: /etc/ssl/certs/ + - name: tls + secret: + secretName: kiam-agent-tls + - name: xtables + hostPath: + path: /run/xtables.lock + type: FileOrCreate + containers: + - name: kiam + securityContext: + capabilities: + add: ["NET_ADMIN"] + image: quay.io/uswitch/kiam:master # USE A TAGGED RELEASE IN PRODUCTION + imagePullPolicy: Always + command: + - /kiam + args: + - agent + - --iptables + - --host-interface=cali+ + - --json-log + - --port=8181 + - --cert=/etc/kiam/tls/agent.pem + - --key=/etc/kiam/tls/agent-key.pem + - --ca=/etc/kiam/tls/ca.pem + - --server-address=kiam-server:443 + - --prometheus-listen-addr=0.0.0.0:9620 + - --prometheus-sync-interval=5s + - --gateway-timeout-creation=1s + env: + - name: HOST_IP + valueFrom: + fieldRef: + fieldPath: status.podIP + volumeMounts: + - mountPath: /etc/ssl/certs + name: ssl-certs + - mountPath: /etc/kiam/tls + name: tls + - mountPath: /var/run/xtables.lock + name: xtables + livenessProbe: + httpGet: + path: /ping + port: 8181 + initialDelaySeconds: 3 + periodSeconds: 3 +``` +server-rbac.yaml +``` +--- +kind: ServiceAccount +apiVersion: v1 +metadata: + name: kiam-server + namespace: kube-system +--- +apiVersion: rbac.authorization.k8s.io/v1beta1 +kind: ClusterRole +metadata: + name: kiam-read +rules: +- apiGroups: + - "" + resources: + - namespaces + - pods + verbs: + - watch + - get + - list +--- +apiVersion: rbac.authorization.k8s.io/v1beta1 +kind: ClusterRoleBinding +metadata: + name: kiam-read +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: kiam-read +subjects: +- kind: ServiceAccount + name: kiam-server + namespace: kube-system +--- +apiVersion: rbac.authorization.k8s.io/v1beta1 +kind: ClusterRole +metadata: + name: kiam-write +rules: +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +--- +apiVersion: rbac.authorization.k8s.io/v1beta1 +kind: ClusterRoleBinding +metadata: + name: kiam-write +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: kiam-write +subjects: +- kind: ServiceAccount + name: kiam-server + namespace: kube-system +``` +After Deploying the above components make sure that the kiam_server & kiam_agent pods are up & running. + +## 4. Create the target cluster +Make sure you create copy of the "aws/out" directory called "out2". To create the target cluster we must update the provider_components.yaml generated in the out2 directory. +1. Remove the credentials secret added at the bottom of the provider_components.yaml and do not mount the secret +2. 
Add the following annotation to the template of aws-provider-controller-manager stateful set to specify the new role that was created in target account. +``` + annotations: + iam.amazonaws.com/role: arn:aws:iam:::role/cluster-api +``` +3. Also add this below annotation to the "aws-provider-system" namespace +``` + annotations: + iam.amazonaws.com/permitted: ".*" +``` + +Create a new cluster using the steps similar to the one used to create the source cluster. They are as follows: +export SOURCE_KUBECONFIG= +``` +kubectl alpha phases apply-cluster-api-components --provider-components cmd/clusterctl/examples/aws/out2/provider-components.yaml \ +--kubeconfig $SOURCE_KUBECONFIG -The last role to be created is the one that is attached to the control plane that runs the CAPA controllers. -This role must have a minimal set of permissions +kubectl alpha phases apply-cluster --cluster cmd/clusterctl/examples/aws/out2/cluster.yaml --kubeconfig $SOURCE_KUBECONFIG +kubectl alpha phases apply-machines --machines cmd/clusterctl/examples/aws/out2/machines.yaml +--kubeconfig $SOURCE_KUBECONFIG + +kubectl alpha phases get-kubeconfig --provider aws --cluster-name --kubeconfig $SOURCE_KUBECONFIG +export TARGET_KUBECONFIG=`pwd`/kubeconfig + +kubectl alpha phases apply-addons -a cmd/clusterctl/examples/aws/out2/addons.yaml +--kubeconfig $TARGET_KUBECONFIG + +``` +This creates the new cluster in the target AWS account. From 9d42f9dd5749409f7cfaabb61c004ac5f00c4318 Mon Sep 17 00:00:00 2001 From: harishspqr Date: Fri, 19 Apr 2019 12:27:20 -0700 Subject: [PATCH 3/7] cleanup the documentation for roleassumption --- docs/roleassumption.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/roleassumption.md b/docs/roleassumption.md index 270e764f67..252bc3189c 100644 --- a/docs/roleassumption.md +++ b/docs/roleassumption.md @@ -5,7 +5,7 @@ KIAM lets the controller pod(s) to assume an AWS role that enables them create A operational cluster. This way we wouldn't have to mount any AWS credentials or load environment variables to supply AWS credentials to the CAPA controller. This is automatically taken care by the KIAM components. Note: If you dont want to use KIAM and rather want to mount the credentials as secrets, you may still achieve cross -account role assumption by using multiple profiles. (TODO add this section at the bottom) +account role assumption by using multiple profiles. ### Glossory From af20116a6e59695777d3a4f81b40104f43961526 Mon Sep 17 00:00:00 2001 From: harishspqr Date: Fri, 19 Apr 2019 15:06:28 -0700 Subject: [PATCH 4/7] Resolved the comments: role assumption documentation. --- docs/roleassumption.md | 82 +++++++++++++++++++++++++----------------- 1 file changed, 49 insertions(+), 33 deletions(-) diff --git a/docs/roleassumption.md b/docs/roleassumption.md index 252bc3189c..e6b68f899b 100644 --- a/docs/roleassumption.md +++ b/docs/roleassumption.md @@ -1,6 +1,6 @@ -# Creating clusters using cross account role assumption using kiam +# Creating clusters using cross account role assumption using KIAM -This document outlines the list of steps to create the target cluster via cross account role assumption using KIAM. +This document outlines the list of steps to create the target cluster via cross account role assumption using [KIAM]https://github.com/uswitch/kiam. KIAM lets the controller pod(s) to assume an AWS role that enables them create AWS resources necessary to create an operational cluster. 
This way we wouldn't have to mount any AWS credentials or load environment variables to supply AWS credentials to the CAPA controller. This is automatically taken care by the KIAM components. @@ -13,17 +13,16 @@ account role assumption by using multiple profiles. * Source Account - The AWS account where the CAPA controller runs. ## Assumptions -1. The CAPA controllers are running in 1 AWS account and you want to create the target cluster in another AWS account. -(We could also use this doc to create a cluster in the same account) +1. The CAPA controllers are running in an AWS account and you want to create the target cluster in another AWS account. 2. This assumes that you start with no existing clusters. ## High level steps 1. Creating a bootstrap/management cluster in AWS - This can be done by running the phases in clusterctl * Uses the existing provider components yaml -2. Setting up cross account roles -3. Deploying the Kiam server/agent -4. Create the target cluster (through kiam) +2. Setting up cross account IAM roles +3. Deploying the KIAM server/agent +4. Create the target cluster (through KIAM) * Uses different provider components with no secrets and annotation to indicate the IAM Role to assume. ## 1. Creating the bootstrap cluster in AWS @@ -31,53 +30,60 @@ Using clusterctl command we can create a new cluster in AWS which in turn will a bootstrap cluster to create the target cluster(in a different AWS account. This can be achieved by using the phases in clusterctl to perform all the steps except the pivoting. This will provide us with a bare-bones functioning cluster that we can use as a bootstrap cluster. -To begin with follow the steps in this getting started guide -(https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/master/docs/getting-started.md) to setup the environment +To begin with follow the steps in this [getting started guide] +https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/master/docs/getting-started.md to setup the environment except for creating the actual cluster. Instead follow the steps below to create the cluster. create a new cluster using kind for bootstrapping purpose by running: -```$xslt -kind create cluster + +```$sh +kind create cluster --name ``` + and get its kube config path by running -``` + +```$sh export KIND_KUBECONFIG=`kind get kubeconfig-path` ``` -Use the following commands to create new (bootstrap) cluster in AWS -``` +Use the following commands to create new (bootstrap) cluster in AWS. +```$sh kubectl alpha phases apply-cluster-api-components --provider-components cmd/clusterctl/examples/aws/out/provider-components.yaml \ --kubeconfig $KIND_KUBECONFIG kubectl alpha phases apply-cluster --cluster cmd/clusterctl/examples/aws/out/cluster.yaml --kubeconfig $KIND_KUBECONFIG +``` -kubectl alpha phases apply-machines --machines cmd/clusterctl/examples/aws/out/machines.yaml ---kubeconfig $KIND_KUBECONFIG +We only need to create the control plane on the cluster running in AWS source account. Since the example includes definition for a worker node, you may delete it. 
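For example (an illustrative step only; the exact contents come from whatever manifests you generated), edit the generated machines list and drop the worker-node Machine entry so that only the control-plane Machine remains:

```
# The management cluster only needs the control plane that runs the controllers,
# so the worker ("node") Machine item in the generated list can be removed.
vi cmd/clusterctl/examples/aws/out/machines.yaml
```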
+ +```$sh +kubectl alpha phases apply-machines --machines cmd/clusterctl/examples/aws/out/machines.yaml --kubeconfig $KIND_KUBECONFIG kubectl alpha phases get-kubeconfig --provider aws --cluster-name --kubeconfig $KIND_KUBECONFIG -export AWS_KUBECONFIG=`pwd`/kubeconfig -kubectl alpha phases apply-addons -a cmd/clusterctl/examples/aws/out/addons.yaml ---kubeconfig $AWS_KUBECONFIG +export AWS_KUBECONFIG=$PWD/kubeconfig +kubectl alpha phases apply-addons -a cmd/clusterctl/examples/aws/out/addons.yaml --kubeconfig $AWS_KUBECONFIG ``` Verify that all the pods in the kube-system namespace are running smoothly. Also you may remove the additional node in the machines example yaml since we are only interested in running the controllers that runs in control plane node (although its not required to make any changes there). You can destroy your local kind cluster by running -```$xslt -make kind-reset + +```$sh +kind delete cluster --name ``` ## 2. Setting up cross account roles -In this step we will new roles/policy in total across 2 different AWS accounts. -First lets start by creating the roles in the account where the AWS controller runs. Following the directions -posted here:https://github.com/uswitch/kiam/blob/master/docs/IAM.md create a "kiam_server" role +In this step we will create new roles/policy in across 2 different AWS accounts. +Let us start by creating the roles in the account where the AWS controller runs. Following the directions +posted in [KIAM repo]https://github.com/uswitch/kiam/blob/master/docs/IAM.md create a "kiam_server" role in AWS that only has a single managed policy with a single permission "sts:AssumeRole". Also add a trust policy on the "kiam_server" role to include the role attached to the Control plane instance as a trusted entity. This looks something like this: - ```$xslt + + ```$json { "Version": "2012-10-17", "Statement": [ @@ -93,10 +99,13 @@ in AWS that only has a single managed policy with a single permission "sts:Assum ``` Next we must establish a link between this "kiam_server" role on source AWS account and the role on target AWS account that has the permissions to create new cluster. -Begin by running the clusterawsadm cli to create a new stack on the target account where the target cluster is created. Make sure you use the credentials for target AWS account for running this step. +Begin by running the clusterawsadm cli to create a new stack on the target account where the target cluster is created. Make sure you use the credentials for target AWS account before creating the stack. + ```clusterawsadm alpha bootstrap create-stack``` + Then sign-in to the target AWS account to establish the link as mentioned above. Create a new Role with the permission policy set to "controllers.cluster-api-provider-aws.sigs.k8s.io". Lets name this role "cluster-api" for future reference. Add a new trust relationship to include the "kiam_server" role from the source account as trusted entity. This is shown below: -``` + +```$json { "Version": "2012-10-17", "Statement": [ @@ -110,13 +119,15 @@ Then sign-in to the target AWS account to establish the link as mentioned above. ] } ``` -## 3. Deploying the Kiam server & agent -By Now, your target cluster must be up & running. Make sure your KUBECONFIG pointing to the cluster in the target account. -Create new secrets using the steps outlined here: https://github.com/uswitch/kiam/blob/master/docs/TLS.md +## 3. Deploying the KIAM server & agent +By now, your target cluster must be up & running. 
Make sure your KUBECONFIG pointing to the cluster in the target account. + +Create new secrets using the steps outlined [here]https://github.com/uswitch/kiam/blob/master/docs/TLS.md Apply the manifest shown below: Make sure you update the argument to include your source AWS account "--assume-role-arn=arn:aws:iam:::role/kiam_server" server.yaml + ``` --- apiVersion: extensions/v1beta1 @@ -218,7 +229,9 @@ spec: targetPort: 443 protocol: TCP ``` + agent.yaml + ``` apiVersion: extensions/v1beta1 kind: DaemonSet @@ -293,7 +306,9 @@ spec: initialDelaySeconds: 3 periodSeconds: 3 ``` + server-rbac.yaml + ``` --- kind: ServiceAccount @@ -356,7 +371,8 @@ subjects: name: kiam-server namespace: kube-system ``` -After Deploying the above components make sure that the kiam_server & kiam_agent pods are up & running. + +After deploying the above components make sure that the kiam_server & kiam_agent pods are up & running. ## 4. Create the target cluster Make sure you create copy of the "aws/out" directory called "out2". To create the target cluster we must update the provider_components.yaml generated in the out2 directory. @@ -380,13 +396,13 @@ kubectl alpha phases apply-cluster-api-components --provider-components cmd/clus kubectl alpha phases apply-cluster --cluster cmd/clusterctl/examples/aws/out2/cluster.yaml --kubeconfig $SOURCE_KUBECONFIG -kubectl alpha phases apply-machines --machines cmd/clusterctl/examples/aws/out2/machines.yaml +kubectl alpha phases apply-machines --machines cmd/clusterctl/examples/aws/out2/machines.yaml \ --kubeconfig $SOURCE_KUBECONFIG kubectl alpha phases get-kubeconfig --provider aws --cluster-name --kubeconfig $SOURCE_KUBECONFIG export TARGET_KUBECONFIG=`pwd`/kubeconfig -kubectl alpha phases apply-addons -a cmd/clusterctl/examples/aws/out2/addons.yaml +kubectl alpha phases apply-addons -a cmd/clusterctl/examples/aws/out2/addons.yaml \ --kubeconfig $TARGET_KUBECONFIG ``` From 4bd54fddd46388469cf8bec5c8b54b26b0c5e640 Mon Sep 17 00:00:00 2001 From: harishspqr Date: Fri, 19 Apr 2019 15:11:58 -0700 Subject: [PATCH 5/7] Fix minor issues - roleassumption.md --- docs/roleassumption.md | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/docs/roleassumption.md b/docs/roleassumption.md index e6b68f899b..82aac01973 100644 --- a/docs/roleassumption.md +++ b/docs/roleassumption.md @@ -83,6 +83,8 @@ in AWS that only has a single managed policy with a single permission "sts:Assum "kiam_server" role to include the role attached to the Control plane instance as a trusted entity. This looks something like this: + In "kiam_server" role (Source AWS account): + ```$json { "Version": "2012-10-17", @@ -105,6 +107,8 @@ Begin by running the clusterawsadm cli to create a new stack on the target accou Then sign-in to the target AWS account to establish the link as mentioned above. Create a new Role with the permission policy set to "controllers.cluster-api-provider-aws.sigs.k8s.io". Lets name this role "cluster-api" for future reference. Add a new trust relationship to include the "kiam_server" role from the source account as trusted entity. This is shown below: +In "controllers.cluster-api-provider-aws.sigs.k8s.io" role(target AWS account) + ```$json { "Version": "2012-10-17", @@ -389,8 +393,9 @@ Make sure you create copy of the "aws/out" directory called "out2". To create th ``` Create a new cluster using the steps similar to the one used to create the source cluster. 
They are as follows: -export SOURCE_KUBECONFIG= ``` +export SOURCE_KUBECONFIG= + kubectl alpha phases apply-cluster-api-components --provider-components cmd/clusterctl/examples/aws/out2/provider-components.yaml \ --kubeconfig $SOURCE_KUBECONFIG From 58b8187f0072a4a933e07960e9ba0857c69bf7f1 Mon Sep 17 00:00:00 2001 From: harishspqr Date: Mon, 22 Apr 2019 16:23:40 -0700 Subject: [PATCH 6/7] resolve more comments to roleassumption.md --- docs/roleassumption.md | 36 +++++++++++++++++++++--------------- 1 file changed, 21 insertions(+), 15 deletions(-) diff --git a/docs/roleassumption.md b/docs/roleassumption.md index 82aac01973..79f827ea16 100644 --- a/docs/roleassumption.md +++ b/docs/roleassumption.md @@ -1,6 +1,6 @@ # Creating clusters using cross account role assumption using KIAM -This document outlines the list of steps to create the target cluster via cross account role assumption using [KIAM]https://github.com/uswitch/kiam. +This document outlines the list of steps to create the target cluster via cross account role assumption using [KIAM](https://github.com/uswitch/kiam). KIAM lets the controller pod(s) to assume an AWS role that enables them create AWS resources necessary to create an operational cluster. This way we wouldn't have to mount any AWS credentials or load environment variables to supply AWS credentials to the CAPA controller. This is automatically taken care by the KIAM components. @@ -9,29 +9,29 @@ account role assumption by using multiple profiles. ### Glossory -* Target Account - The account where the cluster is created -* Source Account - The AWS account where the CAPA controller runs. +* Management cluster - The cluster that runs in AWS and is used to create target clusters in different AWS accounts +* Target account - The account where the target cluster is created +* Source account - The AWS account where the CAPA controllers for management cluster runs. -## Assumptions +## Goals 1. The CAPA controllers are running in an AWS account and you want to create the target cluster in another AWS account. 2. This assumes that you start with no existing clusters. ## High level steps -1. Creating a bootstrap/management cluster in AWS - This can be done by running the phases in clusterctl +1. Creating a management cluster in AWS - This can be done by running the phases in clusterctl * Uses the existing provider components yaml 2. Setting up cross account IAM roles 3. Deploying the KIAM server/agent 4. Create the target cluster (through KIAM) * Uses different provider components with no secrets and annotation to indicate the IAM Role to assume. -## 1. Creating the bootstrap cluster in AWS +## 1. Creating the management cluster in AWS Using clusterctl command we can create a new cluster in AWS which in turn will act as the -bootstrap cluster to create the target cluster(in a different AWS account. This can be achieved by using the phases in +management cluster to create the target cluster(in a different AWS account. This can be achieved by using the phases in clusterctl to perform all the steps except the pivoting. This will provide us with a bare-bones functioning cluster that -we can use as a bootstrap cluster. -To begin with follow the steps in this [getting started guide] -https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/master/docs/getting-started.md to setup the environment +we can use as a management cluster. 
+To begin with follow the steps in this [getting started guide](https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/master/docs/getting-started.md) to setup the environment except for creating the actual cluster. Instead follow the steps below to create the cluster. create a new cluster using kind for bootstrapping purpose by running: @@ -46,7 +46,7 @@ and get its kube config path by running export KIND_KUBECONFIG=`kind get kubeconfig-path` ``` -Use the following commands to create new (bootstrap) cluster in AWS. +Use the following commands to create new management cluster in AWS. ```$sh kubectl alpha phases apply-cluster-api-components --provider-components cmd/clusterctl/examples/aws/out/provider-components.yaml \ --kubeconfig $KIND_KUBECONFIG @@ -78,7 +78,7 @@ kind delete cluster --name In this step we will create new roles/policy in across 2 different AWS accounts. Let us start by creating the roles in the account where the AWS controller runs. Following the directions -posted in [KIAM repo]https://github.com/uswitch/kiam/blob/master/docs/IAM.md create a "kiam_server" role +posted in [KIAM repo](https://github.com/uswitch/kiam/blob/master/docs/IAM.md) create a "kiam_server" role in AWS that only has a single managed policy with a single permission "sts:AssumeRole". Also add a trust policy on the "kiam_server" role to include the role attached to the Control plane instance as a trusted entity. This looks something like this: @@ -127,7 +127,7 @@ In "controllers.cluster-api-provider-aws.sigs.k8s.io" role(target AWS account) ## 3. Deploying the KIAM server & agent By now, your target cluster must be up & running. Make sure your KUBECONFIG pointing to the cluster in the target account. -Create new secrets using the steps outlined [here]https://github.com/uswitch/kiam/blob/master/docs/TLS.md +Create new secrets using the steps outlined [here](https://github.com/uswitch/kiam/blob/master/docs/TLS.md) Apply the manifest shown below: Make sure you update the argument to include your source AWS account "--assume-role-arn=arn:aws:iam:::role/kiam_server" server.yaml @@ -376,10 +376,16 @@ subjects: namespace: kube-system ``` -After deploying the above components make sure that the kiam_server & kiam_agent pods are up & running. +After deploying the above components make sure that the kiam_server and kiam_agent pods are up and running. ## 4. Create the target cluster -Make sure you create copy of the "aws/out" directory called "out2". To create the target cluster we must update the provider_components.yaml generated in the out2 directory. +Make sure you create copy of the "aws/out" directory called "out2". To create the target cluster we must update the provider_components.yaml generated in the out2 directory as shown below (to be run from the repository root directory): + +``` +cp cmd/clusterctl/examples/aws/out cmd/clusterctl/examples/aws/out2 +vi cmd/clusterctl/examples/aws/out2/provider-components.yaml +``` + 1. Remove the credentials secret added at the bottom of the provider_components.yaml and do not mount the secret 2. Add the following annotation to the template of aws-provider-controller-manager stateful set to specify the new role that was created in target account. 
``` From 846cf29df51f36ddeb5327745f1667f31070942a Mon Sep 17 00:00:00 2001 From: harishspqr Date: Tue, 23 Apr 2019 13:24:13 -0700 Subject: [PATCH 7/7] Resolve more comments - roleassumption.md --- docs/roleassumption.md | 44 ++++++++++++++++++++---------------------- 1 file changed, 21 insertions(+), 23 deletions(-) diff --git a/docs/roleassumption.md b/docs/roleassumption.md index 79f827ea16..3d42242c3b 100644 --- a/docs/roleassumption.md +++ b/docs/roleassumption.md @@ -7,7 +7,7 @@ supply AWS credentials to the CAPA controller. This is automatically taken care Note: If you dont want to use KIAM and rather want to mount the credentials as secrets, you may still achieve cross account role assumption by using multiple profiles. -### Glossory +### Glossary * Management cluster - The cluster that runs in AWS and is used to create target clusters in different AWS accounts * Target account - The account where the target cluster is created @@ -48,22 +48,22 @@ export KIND_KUBECONFIG=`kind get kubeconfig-path` Use the following commands to create new management cluster in AWS. ```$sh -kubectl alpha phases apply-cluster-api-components --provider-components cmd/clusterctl/examples/aws/out/provider-components.yaml \ +clusterctl alpha phases apply-cluster-api-components --provider-components cmd/clusterctl/examples/aws/out/provider-components.yaml \ --kubeconfig $KIND_KUBECONFIG -kubectl alpha phases apply-cluster --cluster cmd/clusterctl/examples/aws/out/cluster.yaml --kubeconfig $KIND_KUBECONFIG +clusterctl alpha phases apply-cluster --cluster cmd/clusterctl/examples/aws/out/cluster.yaml --kubeconfig $KIND_KUBECONFIG ``` We only need to create the control plane on the cluster running in AWS source account. Since the example includes definition for a worker node, you may delete it. ```$sh -kubectl alpha phases apply-machines --machines cmd/clusterctl/examples/aws/out/machines.yaml --kubeconfig $KIND_KUBECONFIG +clusterctl alpha phases apply-machines --machines cmd/clusterctl/examples/aws/out/machines.yaml --kubeconfig $KIND_KUBECONFIG -kubectl alpha phases get-kubeconfig --provider aws --cluster-name --kubeconfig $KIND_KUBECONFIG +clusterctl alpha phases get-kubeconfig --provider aws --cluster-name --kubeconfig $KIND_KUBECONFIG export AWS_KUBECONFIG=$PWD/kubeconfig -kubectl alpha phases apply-addons -a cmd/clusterctl/examples/aws/out/addons.yaml --kubeconfig $AWS_KUBECONFIG +kubectl apply -f cmd/clusterctl/examples/aws/out/addons.yaml --kubeconfig $AWS_KUBECONFIG ``` Verify that all the pods in the kube-system namespace are running smoothly. 
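For example (an illustrative check):

```
# Run against the newly created cluster in the source account.
kubectl get pods -n kube-system --kubeconfig $AWS_KUBECONFIG
```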
Also you may remove the additional node in @@ -134,7 +134,7 @@ server.yaml ``` --- -apiVersion: extensions/v1beta1 +apiVersion: apps/v1 kind: DaemonSet metadata: namespace: kube-system @@ -166,7 +166,7 @@ spec: hostNetwork: true containers: - name: kiam - image: quay.io/uswitch/kiam:master # USE A TAGGED RELEASE IN PRODUCTION + image: quay.io/uswitch/kiam:v3.2 imagePullPolicy: Always command: - /kiam @@ -237,7 +237,7 @@ spec: agent.yaml ``` -apiVersion: extensions/v1beta1 +apiVersion: apps/v1 kind: DaemonSet metadata: namespace: kube-system @@ -274,7 +274,7 @@ spec: securityContext: capabilities: add: ["NET_ADMIN"] - image: quay.io/uswitch/kiam:master # USE A TAGGED RELEASE IN PRODUCTION + image: quay.io/uswitch/kiam:v3.2 imagePullPolicy: Always command: - /kiam @@ -321,7 +321,7 @@ metadata: name: kiam-server namespace: kube-system --- -apiVersion: rbac.authorization.k8s.io/v1beta1 +apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: kiam-read @@ -336,7 +336,7 @@ rules: - get - list --- -apiVersion: rbac.authorization.k8s.io/v1beta1 +apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: kiam-read @@ -349,7 +349,7 @@ subjects: name: kiam-server namespace: kube-system --- -apiVersion: rbac.authorization.k8s.io/v1beta1 +apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: kiam-write @@ -362,7 +362,7 @@ rules: - create - patch --- -apiVersion: rbac.authorization.k8s.io/v1beta1 +apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: kiam-write @@ -399,22 +399,20 @@ vi cmd/clusterctl/examples/aws/out2/provider-components.yaml ``` Create a new cluster using the steps similar to the one used to create the source cluster. They are as follows: -``` +```$sh export SOURCE_KUBECONFIG= -kubectl alpha phases apply-cluster-api-components --provider-components cmd/clusterctl/examples/aws/out2/provider-components.yaml \ +clusterctl alpha phases apply-cluster-api-components --provider-components cmd/clusterctl/examples/aws/out2/provider-components.yaml \ --kubeconfig $SOURCE_KUBECONFIG -kubectl alpha phases apply-cluster --cluster cmd/clusterctl/examples/aws/out2/cluster.yaml --kubeconfig $SOURCE_KUBECONFIG +kubectl -f apply cmd/clusterctl/examples/aws/out2/cluster.yaml --kubeconfig $SOURCE_KUBECONFIG -kubectl alpha phases apply-machines --machines cmd/clusterctl/examples/aws/out2/machines.yaml \ ---kubeconfig $SOURCE_KUBECONFIG +kubectl apply -f cmd/clusterctl/examples/aws/out2/machines.yaml --kubeconfig $SOURCE_KUBECONFIG -kubectl alpha phases get-kubeconfig --provider aws --cluster-name --kubeconfig $SOURCE_KUBECONFIG -export TARGET_KUBECONFIG=`pwd`/kubeconfig +clusterctl alpha phases get-kubeconfig --provider aws --cluster-name --kubeconfig $SOURCE_KUBECONFIG +export KUBECONFIG=$PWD/kubeconfig -kubectl alpha phases apply-addons -a cmd/clusterctl/examples/aws/out2/addons.yaml \ ---kubeconfig $TARGET_KUBECONFIG +kubectl apply -f cmd/clusterctl/examples/aws/out2/addons.yaml ``` This creates the new cluster in the target AWS account.
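As a final sanity check (a hedged example; resource names and namespaces will differ in your environment), confirm that the target cluster reconciled and is reachable:

```
# From the management cluster in the source account: the Cluster API objects for
# the target cluster should exist and its Machine(s) should become provisioned.
kubectl get clusters,machines --kubeconfig $SOURCE_KUBECONFIG

# Against the target cluster itself, via the KUBECONFIG exported in the last step.
kubectl get nodes
```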