Support serviceAccountSelector for NetworkPolicy #2927
Comments
NPL means the "NodePortLocal" feature in the context of Antrea, but I am pretty sure that is not what you are referring to here. I think you mean NetworkPolicies, or more precisely Antrea-native NetworkPolicies. I think I understand the request, but I am not sure why you wouldn't use a common label for all these workloads (which share the same serviceAccount), so that you can define a single NetworkPolicy which applies to / selects all of them; a sketch of that alternative follows below. However, I see that Calico actually supports the feature you are describing: https://docs.projectcalico.org/security/service-accounts. I'm tagging @abhiraut to see if he has thoughts about this.
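For reference, a minimal sketch of the common-label alternative mentioned above, using a standard Kubernetes NetworkPolicy; the label key/value, namespace, and port are made up for illustration:

```yaml
# Sketch: one standard NetworkPolicy selecting every workload that carries
# a shared label. The label, namespace, and port are hypothetical.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-shared-workloads
  namespace: default
spec:
  podSelector:
    matchLabels:
      app-group: db-clients      # common label applied to all the workloads
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: db            # the resource the workloads may access
      ports:
        - protocol: TCP
          port: 5432
```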
Oops, thanks for pointing it out, it's NetworkPolicy.
Thanks for raising this issue. There are folks in the upstream network policy group interested in supporting this: https://docs.google.com/document/d/1Q_iI26PEEsU7seyIOExxo5LdFbeAsXI_A2W_ShIQ9_8/edit#heading=h.vuvj3ejktgmi. @GraysonWu is also actively thinking about this topic, so we can support this sooner with Antrea-native policies.
This issue is stale because it has been open for 90 days with no activity. Remove the stale label or comment, or this will be closed in 90 days.
Fixes #2927. This PR adds `serviceAccount` field support to ACNP. It uses a Namespace and a Name to specify a ServiceAccount, and all Pods with this ServiceAccount will be selected as workloads. It can be used in egress `to`, ingress `from`, and `appliedTo`, at both the policy and the single-rule level. To implement this feature, the PR also adds a custom label to all Pods internally, which looks like `internal.antrea.io/service-account:[ServiceAccountName]`. When processing an ACNP, `serviceAccount` is translated into a `GroupSelector` with a `Namespace` and a `PodSelector` to select the Pods we need. Signed-off-by: wgrayson <[email protected]>
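A minimal sketch of what an ACNP using this field could look like, based on the PR description above; the `apiVersion`, policy name, priority, and the ServiceAccount/Namespace names are assumptions for illustration, while the `serviceAccount` shape with `name` and `namespace` follows the description:

```yaml
# Sketch of an Antrea ClusterNetworkPolicy (ACNP) using the serviceAccount
# field. The apiVersion, names, and priority are assumptions for illustration.
apiVersion: crd.antrea.io/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: acnp-service-account-demo
spec:
  priority: 5
  appliedTo:
    - serviceAccount:
        name: sa-client          # selects all Pods using this ServiceAccount
        namespace: ns-client
  ingress:
    - action: Allow
      from:
        - serviceAccount:
            name: sa-peer
            namespace: ns-peer
```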
Describe the problem/challenge you have
Some workloads share the same serviceAccount, which grants them the same permission to access certain resources. Currently, if we want to set up an allow NetworkPolicy for these workloads to access those resources, we need to create several NetworkPolicies (or use several groups) to match all the workloads, which is a little complicated.
Describe the solution you'd like
We could use a serviceAccountSelector to match the workloads, so that only one NetworkPolicy needs to be created, simplifying NetworkPolicy setup. A rough sketch of what this could look like follows below.
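A hypothetical illustration of the requested selector; the `serviceAccountSelector` field name comes from this request, but it is not an implemented API, and its placement and the `matchNames` syntax are invented here:

```yaml
# Hypothetical only: serviceAccountSelector is the requested field, not an
# existing API; matchNames and its placement are invented for illustration.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-by-service-account
  namespace: default
spec:
  podSelector: {}                   # applies to all Pods in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - serviceAccountSelector:   # proposed: select peers by ServiceAccount
            matchNames:
              - shared-sa
```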