
CVE-2023-1260 attempt to create RBAC not currently held in openshift cluster: pods/ephemeralcontainers pods/status #2550

Open
antoinetran opened this issue Feb 28, 2025 · 4 comments

@antoinetran

What happened?

Deploying vcluster on an OpenShift cluster where https://bugzilla.redhat.com/show_bug.cgi?id=2176267 is fixed (meaning some RBAC grants are restricted) leads to:

warning: Upgrade "my-vcluster" failed: failed to create resource: roles.rbac.authorization.k8s.io "vc-my-vcluster" is forbidden: user "antoinetran" (groups=["2004833" "system:authenticated:oauth" "system:authenticated"]) is attempting to grant RBAC permissions not currently held:
{APIGroups:[""], Resources:["pods/ephemeralcontainers"], Verbs:["patch" "update"]}
{APIGroups:[""], Resources:["pods/status"], Verbs:["patch" "update"]}

What did you expect to happen?

The vcluster deployment succeeds.

How can we reproduce it (as minimally and precisely as possible)?

  1. Deploy a Kubernetes cluster
  2. Create a serviceAccount user, not admin, without these RBAC rules:
{APIGroups:[""], Resources:["pods/ephemeralcontainers"], Verbs:["patch" "update"]}
{APIGroups:[""], Resources:["pods/status"], Verbs:["patch" "update"]}
  3. Deploy vcluster using that serviceAccount (a minimal sketch follows below)
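
For steps 2 and 3, a minimal sketch, assuming hypothetical names (namespace team-ns, account limited-deployer). The role below is deliberately not a complete vcluster install role; the point is only that the pods/status and pods/ephemeralcontainers subresources are absent:

$ kubectl create namespace team-ns
$ kubectl create serviceaccount limited-deployer -n team-ns

# Grant broad pod permissions, but deliberately omit the pods/status
# and pods/ephemeralcontainers subresources.
$ kubectl create role limited-deployer \
    --verb=create,delete,patch,update,get,list,watch \
    --resource=pods,configmaps,secrets,services,persistentvolumeclaims \
    -n team-ns
$ kubectl create rolebinding limited-deployer --role=limited-deployer \
    --serviceaccount=team-ns:limited-deployer -n team-ns

# Using a kubeconfig/token for that serviceAccount, the install should
# then fail with "attempting to grant RBAC permissions not currently held".
$ helm upgrade --install my-vcluster vcluster --repo https://charts.loft.sh \
    -n team-ns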

Anything else we need to know?

No response

Host cluster Kubernetes version

$ kubectl version
Client Version: v1.28.15
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.2-3580+6216ea1e51a212-dirty

vcluster version

$ vcluster --version
vcluster version 0.23.0

VCluster Config

# My vcluster.yaml / values.yaml here
@antoinetran (Author)

Suggested solution: I can open a pull request removing https://github.com/loft-sh/vcluster/blob/v0.23.0/chart/templates/role.yaml#L31 by default:

  - apiGroups: [""]
    resources: ["pods/status", "pods/ephemeralcontainers"]
    verbs: ["patch", "update"]

Workaround (values.yaml; an example of applying it follows the block):

rbac:
  # Role holds virtual cluster role configuration
  role:
    # Enabled defines if the role should be enabled or disabled.
    enabled: true

# Error:
#         * roles.rbac.authorization.k8s.io "vc-my-vcluster" is forbidden: user "antoinetran" (groups=["2004833" "system:authenticated:oauth" "system:authenticated"]) is attempting to grant RBAC permissions not currently held:
# {APIGroups:[""], Resources:["pods/ephemeralcontainers"], Verbs:["patch" "update"]}
# {APIGroups:[""], Resources:["pods/status"], Verbs:["patch" "update"]}
#         * roles.rbac.authorization.k8s.io "vc-my-vcluster" not found
    overwriteRules:
      - apiGroups: [""]
        resources: ["configmaps", "secrets", "services", "pods", "pods/attach", "pods/portforward", "pods/exec", "persistentvolumeclaims"]
        verbs: ["create", "delete", "patch", "update", "get", "list", "watch"]
      # See https://bugzilla.redhat.com/show_bug.cgi?id=2176267
      #- apiGroups: [""]
      #  resources: ["pods/status", "pods/ephemeralcontainers"]
      #  verbs: ["patch", "update"]
      - apiGroups: ["apps"]
        resources: ["statefulsets", "replicasets", "deployments"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["endpoints", "events", "pods/log"]
        verbs: ["get", "list", "watch"]
      - apiGroups: ["networking.k8s.io"]
        resources: ["ingresses"]
        verbs: ["create", "delete", "patch", "update", "get", "list", "watch"]

@antoinetran (Author)

After deploying the vcluster with the RBAC patch/update rules removed as described above, I can now deploy my Helm components. However, I get errors like this in the events:

ingress-nginx            9m18s       Warning   SyncError                         pod/my-nginx-jupyter-75b859dcdc-lb6bf                                                   Error syncing: patch host object: update object status: pods "my-nginx-jupyter-75b859dcdc-lb6bf-x-ingress-nginx-x-my-vcluster" is forbidden: User "system:serviceaccount:REDACTED:vc-my-vcluster" cannot update resource "pods/status" in API group "" in the namespace "REDACTED"

It seems patching is heavily used by vcluster during syncing. In the previous version, v0.20.0-beta.1, that did not seem to be the case. Do I have to ask my cluster admin to lower the security bar by granting me these RBAC rules, or can vcluster work without patch/update?
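
For reference, the missing grants can be confirmed from the host with kubectl auth can-i, impersonating the serviceAccount named in the event above (namespace redacted as in the logs); both commands print "no" in this setup:

$ kubectl auth can-i update pods/status -n REDACTED \
    --as=system:serviceaccount:REDACTED:vc-my-vcluster
$ kubectl auth can-i patch pods/ephemeralcontainers -n REDACTED \
    --as=system:serviceaccount:REDACTED:vc-my-vcluster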

@antoinetran (Author)

Now I am asking on https://bugzilla.redhat.com/show_bug.cgi?id=2176267 whether these privileges can be granted to users in fixed versions of OpenShift (>= 4.11).

@antoinetran (Author)


The SyncError above is triggered by https://github.com/loft-sh/vcluster/blob/main/pkg/patcher/apply.go#L253. I read the code, but I don't understand the reason behind this behavior: when a pod is created in the vcluster, the pod is also created in the host cluster, first without a status, and then the status is set through a patch. I don't know why vcluster needs to patch the status in the vcluster-to-host direction; only the host-to-vcluster direction makes sense to me. Currently, my deployment in a cluster without the patch/update privilege triggers a few errors each time I create a pod, but this seems to have no consequence: everything runs fine. That is why I believe this status patch may not be needed.
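
To make the failing call concrete: as far as I can tell, what the syncer attempts is equivalent to a patch against the pod's status subresource on the host, something like the sketch below (pod name taken from the event above; the patched field is purely illustrative, not the actual payload vcluster sends). Without the pods/status patch/update rule, this is exactly the kind of request that gets a 403:

$ kubectl patch pod my-nginx-jupyter-75b859dcdc-lb6bf-x-ingress-nginx-x-my-vcluster \
    -n REDACTED --subresource=status --type=merge \
    -p '{"status":{"message":"example"}}'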
