As a customer, I would like the IAM role attached to the worker nodes to have as limited permissions as possible. Currently, if a Pod has access to the EC2 instance metadata, it can assume the instance's attached IAM role and is thereby granted the same permissions the nodes have. This behavior does not depend on the IMDS version and can happen with both v1 and v2.
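To illustrate the problem described above: any pod that can reach the metadata endpoint can retrieve temporary credentials for the node's role. This is a minimal sketch (not part of the issue); the endpoint and paths are the standard EC2 IMDS, while the helper function itself is hypothetical:

```python
import json
import urllib.request

IMDS = "http://169.254.169.254"

def get_node_role_credentials(base_url: str = IMDS) -> dict:
    """Fetch temporary credentials for the instance's IAM role via IMDSv2."""
    # Step 1: obtain a session token (IMDSv2; IMDSv1 skips this step,
    # which is why enforcing v2 alone does not close this gap).
    token_req = urllib.request.Request(
        f"{base_url}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(token_req).read().decode()
    headers = {"X-aws-ec2-metadata-token": token}

    # Step 2: discover the name of the IAM role attached to the instance.
    role_req = urllib.request.Request(
        f"{base_url}/latest/meta-data/iam/security-credentials/",
        headers=headers,
    )
    role = urllib.request.urlopen(role_req).read().decode().strip()

    # Step 3: fetch temporary AccessKeyId/SecretAccessKey/Token for that role.
    cred_req = urllib.request.Request(
        f"{base_url}/latest/meta-data/iam/security-credentials/{role}",
        headers=headers,
    )
    return json.loads(urllib.request.urlopen(cred_req).read().decode())
```

Anything the node's role is allowed to do, the pod can now do with these credentials, which is why trimming that role matters.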
Currently, the capa-iam-operator configures the permissions as follows: https://github.com/giantswarm/capa-iam-operator/blob/master/pkg/iam/nodes_template.go. The set is very broad, and it is definitely advisable to narrow it down, since it raises understandable security concerns. We have to keep in mind workloads such as ebs-csi-driver or aws-cloud-controller-manager, which might require broader permissions due to their underlying processes.
Narrowing down the permissions is the safest approach and should not affect the actual customer workloads running on the clusters. The topic of accessing the instance metadata itself will be taken up separately in the following issue: #3796
Acceptance criteria:
Perform a deep dive by removing ALL permissions granted to the worker nodes' IAM role and identify the actually required permissions, such that the nodes are fully operational with no impact on workloads running in the cluster.
Apply the new, narrowed-down permission set in capa-iam-operator
I found that removing the IAM policy from the nodes-* role (worker nodes) works totally fine. PVC/PV/ingress-nginx+NLB+aws-load-balancer-controller/kubelet are all fine.
However, there's still the CCM and CSI which may require AWS credentials. Those still work right now because they have a node selector to only run on control plane nodes (for CSI, it's Deployment/ebs-csi-controller that uses credentials, not DaemonSet/ebs-csi-node).
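For context, the control-plane pinning mentioned above typically looks like the pattern below. This is an illustrative snippet only, not the actual manifests (those come from the respective ebs-csi-driver/CCM charts):

```yaml
# Illustrative: how a credential-using controller stays off worker nodes,
# so worker roles never need the corresponding IAM permissions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ebs-csi-controller
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/control-plane: ""
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
```

As long as these controllers are scheduled only on control plane nodes, their AWS API permissions can live entirely on the control plane role.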
Since this issue is only about worker nodes, I think we can go ahead removing the IAM policy starting from a certain release.
E2E tests found more issues in "edge" cases: ECR image pulls require ecr:[...] permissions, and Cilium ENI mode needs access to attach network interfaces and describe related resources. Fixed in the above PRs. E2E tests are passing, so I think we're done here for workers once the releases are out.
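A sketch of what a narrowed-down worker node policy covering those two edge cases might look like. The action lists below are an assumption for illustration, not the contents of the merged PRs; in particular, the EC2 actions Cilium's ENI mode needs depend on the Cilium version, so consult its documentation for the authoritative set:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ECRImagePull",
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    },
    {
      "Sid": "CiliumENIMode",
      "Effect": "Allow",
      "Action": [
        "ec2:AttachNetworkInterface",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DescribeInstances",
        "ec2:DescribeSubnets"
      ],
      "Resource": "*"
    }
  ]
}
```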