Auth loop when no admin permissions set on default namespace #6269
Hi,

To resolve issue #6223, we updated Kubeapps from version 12.2.10 to version 12.4.2.

We use ADFS to authenticate on Kubeapps, and unfortunately after the update we were unable to access the portal listing the applications already deployed: we are systematically returned to the page asking us to sign in via OIDC.

We found a workaround by creating a RoleBinding; we also tried to use the cluster role `reader`, but without success.

Note that we confine users to prevent them from deploying applications anywhere other than their own namespace. We think this behaviour will create security issues and lead to misunderstandings on the user side.

Is there a solution to this problem, which appeared with the latest versions?

Regards

Comments
Hi there @overag3. I'm not aware of any recent changes to Kubeapps' authentication, so I'm hesitant to assume anything without further evidence. I'd recommend following the OIDC debugging docs to gather logs or other info showing why your authentication is now failing. It can be tricky to find the actual issue, especially as, from memory, there were upstream issues with ADFS in oauth2-proxy (which MS was working to fix) in the past. Let us know if you can find more details in the logs; otherwise we'll have to wait until we have time to try the setup with ADFS ourselves to confirm the issue, but that may not happen for a while.
Hi, I can't find anything in the Kubeapps pod or kube-apiserver pod logs.
We have been hitting this same issue. We first noticed it on chart https://charts.bitnami.com/bitnami version 12.4.2. During the infinite login redirect loop, an API call to … fails. We found two different workarounds:
Allowing the user logging in via OIDC to "get" namespaces seems to fix the issue, but it is not desirable in our case, because the user can then see many namespaces in the context-switch UI and is defaulted to the first one alphabetically.
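For reference, a minimal sketch of what such a work-around can look like. The Role/RoleBinding names and the OIDC user below are hypothetical placeholders, not taken from this thread; it grants just `get` on the `default` namespace object:

```yaml
# Hypothetical work-around sketch: let an OIDC user "get" the default
# namespace so Kubeapps' post-login check succeeds. All names below are
# placeholders.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubeapps-get-default-ns
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    resourceNames: ["default"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubeapps-get-default-ns
  namespace: default
subjects:
  - kind: User
    name: oidc:jane.doe           # placeholder OIDC username
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: kubeapps-get-default-ns
  apiGroup: rbac.authorization.k8s.io
```

Scoping the Role to the single namespace object avoids the drawback mentioned above, since `get` on one namespace does not let the user list the others.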
Thanks @overag3 and @JoelTrain for the extra information and the explicit work-around; sorry @overag3, I'd misunderstood your issue at first. So it sounds like the issue is that the post-login check, which fetches the default namespace to verify authentication, is failing for users who don't have RBAC permission to get that namespace.
I'll try to dig further, thanks.
Will add that I am facing the same issue. The source of the error could easily be seen in the pinniped-proxy logs:

```
[INFO] - GET https://<host>/api/v1/namespaces/default 403 Forbidden
```

Similar to @JoelTrain, the only fix was to allow non-admins to GET namespaces on the … Some data which may or may not be useful:

config.json

```json
{
"kubeappsCluster": "default",
"kubeappsNamespace": "apps",
"helmGlobalNamespace": "apps",
"carvelGlobalNamespace": "kapp-controller-packaging-global",
"appVersion": "v2.7.0",
"authProxyEnabled": true,
"oauthLoginURI": "/oauth2/start",
"oauthLogoutURI": "/oauth2/sign_out",
"authProxySkipLoginPage": false,
"featureFlags": {
"apiOnly": {
"enabled": false,
"grpc": {
"annotations": {
"nginx.ingress.kubernetes.io/backend-protocol": "GRPC"
}
}
},
"operators": false,
"schemaEditor": {
"enabled": false
}
},
"clusters": [
"default"
],
"theme": "",
"remoteComponentsUrl": "",
"customAppViews": [],
"skipAvailablePackageDetails": false,
"createNamespaceLabels": {}
}
```
Investigated and fixed in #6269. Once found, both the problem and the solution were obvious (it was introduced when we switched backend gRPC libraries), though I still need to investigate why our CI integration tests did not hit this (the intention is that they should, but we may be giving our unprivileged user more privileges than we think). I'll ping here again when the fix has landed and a development build of the kubeapps-apis image is ready for people to verify. Thanks for your patience :)
@absoludity Thank you very much for fixing this bug!! 👍 Regards
…lient (#6319)

### Description of the change

When we switched to connect-grpc, we continued sending standard gRPC error codes from Kubeapps-APIs, and without more information, the [connect-gRPC machinery was converting these to code unknown](https://connect.build/docs/go/errors/#working-with-errors).

When we check that a user has authenticated successfully by getting the default namespace after the login dance, either an OK or a forbidden response means the request was authenticated; unauthenticated means it was not. But instead the error was coming back with an unknown code, and so Kubeapps was not recognising this as authenticated. This was affecting logins for users who do not have permission to get the default namespace (see #6269 for the workaround).

I reproduced this locally (though with token auth), and so before this change, the gRPC request to CheckNamespaceExists came back with what *looks* like a permission-denied response - but the [grpc-status code is 2 - which is unknown](https://grpc.github.io/grpc/core/md_doc_statuscodes.html):

```
grpc-message | rpc error: code = PermissionDenied desc = Forbidden to get the Namespace 'default' due to 'namespaces "default" is forbidden: User "system:serviceaccount:kubeapps:kubeapps-view" cannot get resource "namespaces" in API group "" in the namespace "default"'
grpc-status  | 2 <----- this is "Unknown"
```

After this change, the error arrives as a PermissionDenied as expected:

```
grpc-message | Forbidden to get the Namespace 'default' due to 'namespaces "default" is forbidden: User "system:serviceaccount:kubeapps:kubeapps-view" cannot get resource "namespaces" in API group "" in the namespace "default"'
grpc-status  | 7 <----- this is the "PermissionDenied" code
```

which Kubeapps happily recognises, assuming the user is now authenticated.

### Benefits

People can log in without requiring RBAC to get the default namespace.

### Possible drawbacks

None.

### Applicable issues

- fixes #6269

### Additional information

Although I was glad to find the fix here, I'm still uncertain why our CI integration tests did not pick this up, given that the intention is that we check login with an unprivileged user. I need to investigate that further.

Also, this is a second attempt at fixing this after #6312 ended up touching too many call-sites. I'll follow up switching the other call-sites and removing the old code.

Signed-off-by: Michael Nelson <[email protected]>
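As a minimal, self-contained illustration of the behaviour described above (assuming the connect-go package; this is not the actual Kubeapps code), a plain gRPC status error loses its code when inspected by connect, while a connect.Error keeps it:

```go
package main

import (
	"errors"
	"fmt"

	"github.com/bufbuild/connect-go" // now published as connectrpc.com/connect
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	// A standard gRPC status error, as Kubeapps-APIs used to return:
	// connect does not unwrap gRPC status errors, so the code is "unknown".
	grpcErr := status.Error(codes.PermissionDenied, "Forbidden to get the Namespace 'default'")
	fmt.Println(connect.CodeOf(grpcErr)) // prints: unknown

	// The same failure expressed as a connect.Error keeps its code on the
	// wire, so the client sees grpc-status 7 (PermissionDenied).
	connErr := connect.NewError(connect.CodePermissionDenied,
		errors.New("Forbidden to get the Namespace 'default'"))
	fmt.Println(connect.CodeOf(connErr)) // prints: permission_denied

	// The dashboard's post-login check treats OK or PermissionDenied as
	// "authenticated" and only Unauthenticated as a failed login, which is
	// why the unknown code broke the flow.
}
```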
### Description of the change

Follows on from #6319, cleaning up the resources plugin so that it uses the connect error.

### Applicable issues

- ref #6269

### Additional information

PRs will follow for the helm and flux plugins, before removing the original status error code.

Signed-off-by: Michael Nelson <[email protected]>
### Description of the change

Continuing the switch from status.Error to connect.Error. See #6269 for more info.

### Applicable issues

- ref #6269

Signed-off-by: Michael Nelson <[email protected]>
### Description of the change

This is the rest of the updates removing the old status errors from the Helm plugin.

### Applicable issues

- ref #6269

Signed-off-by: Michael Nelson <[email protected]>
### Description of the change

Doesn't complete the kapp-controller switch to the connect error, but does most of it, with tests passing.

### Applicable issues

- fixes #6269

Signed-off-by: Michael Nelson <[email protected]>
### Description of the change

Finishes the switch of the kapp-controller plugin over to use the connect errors. Next up, flux.

### Applicable issues

- ref #6269

Signed-off-by: Michael Nelson <[email protected]>
### Description of the change

This is the final PR removing all the existing grpc status codes from the kubeapps-apis service.

### Benefits

The dashboard client interprets the correct errors.

### Applicable issues

- fixes #6269

Signed-off-by: Michael Nelson <[email protected]>
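As an illustration of the pattern this PR series converges on, here is a hedged sketch (the function and package names are hypothetical, not the actual Kubeapps source) of a plugin-style check that maps Kubernetes API errors onto connect codes instead of bare gRPC status errors:

```go
package resources

import (
	"context"
	"fmt"

	"github.com/bufbuild/connect-go"
	k8serrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkNamespaceExists is an illustrative handler, not the real Kubeapps
// code: it returns connect errors so the code survives the connect-go
// transport and the dashboard can distinguish forbidden from unauthenticated.
func checkNamespaceExists(ctx context.Context, client kubernetes.Interface, ns string) (bool, error) {
	_, err := client.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{})
	switch {
	case err == nil:
		return true, nil
	case k8serrors.IsNotFound(err):
		return false, nil
	case k8serrors.IsForbidden(err):
		// Authenticated but not authorised: the login flow treats this as a
		// successful authentication.
		return false, connect.NewError(connect.CodePermissionDenied,
			fmt.Errorf("Forbidden to get the Namespace %q due to %q", ns, err))
	case k8serrors.IsUnauthorized(err):
		// Not authenticated: the dashboard should restart the login dance.
		return false, connect.NewError(connect.CodeUnauthenticated, err)
	default:
		return false, connect.NewError(connect.CodeInternal, err)
	}
}
```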