[branch/v6] Backport "RFD 19 implementation (#6731)" #6908
Closed
Conversation
* Delete old docs
* Move docs to parent folder
Fixes #5708: OSS users lose connection to leaf clusters after an upgrade of the root cluster (but not the leaf clusters). Teleport 6.0 switches users to the `ossuser` role, which breaks the implicit cluster mapping of `admin` to `admin` users. The fix downgrades the `admin` role to be less privileged in OSS.
e61e2b2 Backport(v6): Fix cn dialog err handling and disable ace web workers (#234) gravitational/webapps@e61e2b2 [source: -w teleport-v6] [target: -t branch/v6]
…rds compatibility (#5731)
c01b39b Implement OAuth-style state token for AAP auth flow gravitational/webapps@c01b39b [source: -w teleport-v6] [target: -t andrej/v6/security-fixes]
In `auth.Context`, the `Identity` field used to contain the original caller identity and the `User` field contained the mapped local user. These differ when the request comes from a remote trusted cluster. Lots of code assumed that `auth.Context.Identity` contained the local identity and used roles/traits from there.

To prevent this confusion, populate `auth.Context.Identity` with the *mapped* identity, and add `auth.Context.UnmappedIdentity` for callers that actually need it.

One caller that needs `UnmappedIdentity` is the k8s proxy. It uses that identity to generate an ephemeral user cert. Using the local mapped identity in that case would make the downstream server (e.g. kubernetes_service) treat it like a real local user, which doesn't exist in the backend and causes trouble.

The `ProcessKubeCSR` endpoint on the auth server was also updated to understand unmapped remote identities.

Co-authored-by: Andrew Lytvynov <[email protected]>
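The mapped/unmapped split above can be sketched in a few lines of Go. This is an illustrative model, not Teleport's actual `auth.Context` API: the `newContext` constructor, the `roleMap` parameter, and the field layout are assumptions made for the example; only the `Identity`/`UnmappedIdentity` naming comes from the commit message.

```go
package main

import "fmt"

// Identity is a simplified caller identity (hypothetical, for illustration).
type Identity struct {
	Username    string
	Roles       []string
	ClusterName string // cluster that issued the identity
}

// Context mirrors the split described in the commit message.
type Context struct {
	// Identity holds the caller's identity AFTER trusted-cluster role mapping.
	Identity Identity
	// UnmappedIdentity preserves the original caller identity, e.g. for the
	// k8s proxy to mint an ephemeral cert for a remote user.
	UnmappedIdentity Identity
}

// newContext applies a static role mapping for callers from remote clusters
// and keeps the original identity alongside the mapped one.
func newContext(caller Identity, localCluster string, roleMap map[string]string) Context {
	mapped := caller
	if caller.ClusterName != localCluster {
		mapped.Roles = nil
		for _, r := range caller.Roles {
			if local, ok := roleMap[r]; ok {
				mapped.Roles = append(mapped.Roles, local)
			}
		}
	}
	return Context{Identity: mapped, UnmappedIdentity: caller}
}

func main() {
	remote := Identity{Username: "alice", Roles: []string{"admin"}, ClusterName: "leaf"}
	ctx := newContext(remote, "root", map[string]string{"admin": "remote-admin"})
	fmt.Println(ctx.Identity.Roles, ctx.UnmappedIdentity.Roles)
	// → [remote-admin] [admin]
}
```

The point of the shape: code that needs the caller's local privileges reads `ctx.Identity`, while code that must act *as the original remote user* (like the k8s proxy) reads `ctx.UnmappedIdentity`.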
…#6754)

* Create GET db and kube list web handlers (#6672)
* Check cloud feature before setting billing access for web (#6537)
  - Init web handler with auth server feature flags on proxy init
  - Retrieve auth server features by calling Ping when connecting to the auth svc, which contains the server feature flags in the response
Delete user k8s, etc. certificates on re-issue

Prior to this change, user certificates for services like Kubernetes were preserved across a certificate re-issue. This led to issues where elevated privileges granted by an access request were not applied to the service certificates, as they were not updated during the reissue process.

This patch changes the certificate re-issue process such that:

* certificates for services (like Kubernetes) are not preserved over a certificate re-issue. It is expected that they will be recreated on the first access to the service in question, and
* the local keystore files for these service certificates are explicitly deleted so that the now-invalid cached certificates are not returned on keystore queries.

See-Also: #5047
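The reissue behavior above can be sketched as a tiny keystore model. This is a hedged illustration of the idea, not Teleport's real keystore code: the `KeyStore` type, its fields, and `Reissue` are all hypothetical names invented for the example.

```go
package main

import "fmt"

// KeyStore is a toy model of a client keystore: one top-level TLS cert plus
// cached per-service certificates (hypothetical layout).
type KeyStore struct {
	TLSCert   string
	KubeCerts map[string]string // per-k8s-cluster certs
	DBCerts   map[string]string // per-database certs
}

// Reissue installs the new top-level cert and drops all cached service
// certs, so they are regenerated with the caller's current privileges on
// the next access instead of serving stale, pre-access-request certs.
func (k *KeyStore) Reissue(newTLSCert string) {
	k.TLSCert = newTLSCert
	k.KubeCerts = map[string]string{}
	k.DBCerts = map[string]string{}
}

func main() {
	ks := &KeyStore{
		TLSCert:   "cert-without-elevated-roles",
		KubeCerts: map[string]string{"leaf/kube1": "stale"},
		DBCerts:   map[string]string{"postgres": "stale"},
	}
	ks.Reissue("cert-with-access-request-roles")
	fmt.Println(ks.TLSCert, len(ks.KubeCerts), len(ks.DBCerts))
	// → cert-with-access-request-roles 0 0
}
```

Deleting rather than updating the cached files is the simpler invariant: a service cert either reflects the current top-level cert's privileges or it does not exist.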
Prior to this change, `tsh` would only ever forward the internal key agent managed by `tsh` to a remote machine. This change allows a user to specify whether `tsh` should forward the `tsh`-internal keystore or the system key agent at `$SSH_AUTH_SOCK`. It also brings the `-A` command-line option into line with OpenSSH. For more info refer to RFD-0022. See-Also: #1571
The user loading code is kind of convoluted: it loads all the separate backend items from the `/web/users/` prefix into a struct, which is then converted to a full `types.User` object. For each user there are 3 kinds of backend items:

- `/params`, the main one, containing a marshalled `types.User` object
- `/pwd`, which contains the hashed password for local users
- `/mfa/...`, which contain registered MFA devices

When an SSO user expires, we delete the first two items but not `/mfa/...`. This is intentional, to persist MFA devices across logins. The user loading code would fail because the user was "found" (thanks to the MFA items) but didn't have the mandatory `/params` item.

This PR ignores any users that don't have `/params` instead of hard-failing all `GetUsers` calls.
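The fix can be sketched against the backend layout described above. This is a minimal model, not the real loader: the flat `map[string]string` standing in for the backend and the `loadUsers` helper are assumptions made for the example; only the `/web/users/<name>/{params,pwd,mfa/...}` layout comes from the commit message.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// loadUsers lists user names from backend items under /web/users/.
// A user counts as real only if their /params item exists; leftover
// /mfa items from an expired SSO user are skipped instead of failing
// the whole listing.
func loadUsers(items map[string]string) []string {
	seen := map[string]bool{}
	var users []string
	for key := range items {
		rest := strings.TrimPrefix(key, "/web/users/")
		name := strings.SplitN(rest, "/", 2)[0]
		if seen[name] {
			continue
		}
		seen[name] = true
		if _, ok := items["/web/users/"+name+"/params"]; ok {
			users = append(users, name)
		}
	}
	sort.Strings(users) // map iteration order is random; sort for stable output
	return users
}

func main() {
	items := map[string]string{
		"/web/users/alice/params": "{...}",
		"/web/users/alice/pwd":    "hash",
		// bob is an expired SSO user: /params and /pwd were deleted,
		// but MFA devices persist across logins by design.
		"/web/users/bob/mfa/dev1": "{...}",
	}
	fmt.Println(loadUsers(items))
	// → [alice]
}
```

Before the fix, the equivalent of `loadUsers` would return an error as soon as it hit bob's orphaned `/mfa/...` item, breaking `GetUsers` for every caller.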
* docs: add acme
* docs: tweak comments
* client: set TLS certificate usage for k8s/app/db certs

--- TLS usage field

The certificate usage field prevents a certificate from being used for other purposes. For example, a k8s-specific certificate will not be accepted by a database service endpoint. Server-side enforcement logic was already in place for a long time, but we stopped setting the correct Usage in UserCertRequest during the keystore refactoring in 5.0 (with the introduction of k8s certs).

--- TLS certificate overwrite

As part of this, client.ReissueUserCerts will no longer write usage-restricted certificates into the top-level TLS certificate used for Teleport API authentication. For example, when generating a k8s-specific certificate, we used to overwrite both:

- `~/.tsh/keys/$proxy/$user-x509.pem`
- `~/.tsh/keys/$proxy/$user-kube/$cluster/$kubeCluster-x509.pem`

This PR stops overwriting `~/.tsh/keys/$proxy/$user-x509.pem`. This is not a breaking change.

--- Selected k8s cluster

Prior to this PR, `tsh status` printed the selected k8s cluster based on the top-level TLS certificate. Since we no longer overwrite that certificate, it will not contain a k8s cluster name. Instead, we extract it from the kubeconfig, which is actually more accurate, since a user could switch to a different context out-of-band.

* Document UserCertRequest CertUsage enum values
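The usage-restriction idea above can be sketched as a small enforcement check. The enum names and the `acceptedBy` helper here are illustrative placeholders, not the actual `UserCertRequest` CertUsage values or Teleport's enforcement code; only the behavior (a k8s-only cert is rejected by a database endpoint) is taken from the commit message.

```go
package main

import "fmt"

// CertUsage restricts what a certificate may be used for (hypothetical enum).
type CertUsage int

const (
	UsageAll CertUsage = iota // unrestricted, top-level API cert
	UsageKubeOnly
	UsageDatabaseOnly
)

// acceptedBy is a toy server-side check: a service endpoint rejects any
// certificate whose usage does not cover that service.
func acceptedBy(service string, u CertUsage) bool {
	switch u {
	case UsageAll:
		return true
	case UsageKubeOnly:
		return service == "kube"
	case UsageDatabaseOnly:
		return service == "db"
	}
	return false
}

func main() {
	fmt.Println(acceptedBy("db", UsageKubeOnly)) // → false
	fmt.Println(acceptedBy("kube", UsageKubeOnly), acceptedBy("db", UsageAll))
	// → true true
}
```

This also shows why the client-side bug mattered: if the client writes a `UsageAll` value into a cert that should be kube-only, the server-side check above never fires.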
Database service doesn't fully support the cert usage restrictions yet so we need an unrestricted cert again.
Addresses issue #4924

If a default Web Proxy port is not specified by the user, either via config or on the command line, `tsh` defaults to `3080`. Unfortunately, `3080` is often blocked by firewalls, leading to an unacceptably long timeout for the user.

This change adds an RFC 8305-like default-port selection algorithm that tries multiple ports on the supplied host concurrently and selects the most responsive address to use for Web Proxy traffic. I have included the standard HTTPS port (443) in the default set, and this can be easily expanded if other good candidates come along. If port selection fails for any reason, `tsh` reverts to the legacy behaviour of picking `3080` automatically.
This PR backports #6731 to v6.