# Teleport 5.0.0
Teleport 5.0 is a major release with new features, functionality, and bug fixes. You can review the closed 5.0 issues on GitHub for details on all items.
## New Features
Teleport 5.0 introduces two major additions: Teleport Application Access and significant Kubernetes Access improvements, including multi-cluster support.
### Teleport Application Access
Teleport can now be used to provide secure access to web applications. This new feature was built with the express intention of securing internal apps which might have once lived on a VPN, or had a simple authorization and authentication mechanism with little to no audit trail. Application Access works with everything from dashboards to single-page JavaScript applications (SPAs).

Application Access uses mutually authenticated reverse tunnels to establish a secure connection with the Teleport unified Access Plane, which then becomes the single ingress point for all traffic to an internal application.
Adding an application follows the same UX as adding SSH servers or Kubernetes clusters, starting with creating a static or dynamic invite token.
```bash
$ tctl tokens add --type=app
```
Then simply start Teleport with a few new flags.
```bash
$ teleport start --roles=app --token=xyz --auth-server=proxy.example.com:3080 \
    --app-name="example-app" \
    --app-uri="http://localhost:8080"
```

This command starts an app server that proxies the application "example-app" running at `http://localhost:8080` at the public address `https://example-app.example.com`.
Applications can also be configured using the new `app_service` section in `teleport.yaml`.
```yaml
app_service:
  # Teleport Application Access is enabled.
  enabled: yes
  # We've added a default sample app that will check
  # that Teleport Application Access is working
  # and output JWT tokens.
  # https://dumper.teleport.example.com:3080/
  debug_app: true
  apps:
    # Application Access can be used to proxy any HTTP endpoint.
    # Note: the name can't include any spaces and should be DNS-compatible: A-Za-z0-9-._
    - name: "internal-dashboard"
      uri: "http://10.0.1.27:8000"
      # By default, Teleport will make this application available on a
      # sub-domain of your Teleport proxy's hostname:
      # internal-dashboard.teleport.example.com
      # - thus the importance of setting up wildcard DNS.
      # If you want, it's possible to set up a custom public URL instead.
      # DNS records should point to the proxy server.
      # Example public URL for the internal-dashboard app:
      # public_addr: "internal-dashboard.acme.com"
      # Optional labels.
      # Labels can be combined with RBAC rules to control access.
      labels:
        customer: "acme"
        env: "production"
      # Optional dynamic labels.
      commands:
        - name: "os"
          command: ["/usr/bin/uname"]
          period: "5s"
    # A proxy can support multiple applications. Application Access
    # can also be deployed with a Teleport node.
    - name: "arris"
      uri: "http://localhost:3001"
      public_addr: "arris.example.com"
```
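The sample debug app above outputs the JWT that Teleport injects into each proxied request (delivered in the `Teleport-Jwt-Assertion` header). As a hedged sketch, a backend could inspect a token's claims like this; it decodes only and does not verify the signature, which a real service should do against the proxy's JWK endpoint:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the claims segment of a JWT without verifying the signature.

    Illustration only: production code must verify the signature before
    trusting any claim.
    """
    payload = token.split(".")[1]
    # JWT segments use unpadded base64url; restore the padding first.
    payload += "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a throwaway token of the same shape (header.payload.signature)
# to show the round trip; a real app would read the header value instead.
claims = {"username": "alice", "roles": ["access"]}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = "eyJhbGciOiJSUzI1NiJ9." + body + ".signature"
print(jwt_claims(token))  # {'username': 'alice', 'roles': ['access']}
```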
Application Access requires two additional changes: DNS must be updated to point the application domain at the proxy, and the proxy must be loaded with a TLS certificate for that domain. Wildcard DNS and a wildcard TLS certificate can be used to simplify deployment.
```yaml
# When adding the app_service, certificates are required to provide a TLS
# connection. The certificates are managed by the proxy_service.
proxy_service:
  # We've extended support for HTTPS certs. Teleport can now load multiple
  # TLS certificates. In the example below we've obtained a wildcard cert
  # that will be used for proxying the applications.
  # The correct certificate is selected based on the hostname in the HTTPS
  # request using SNI.
  https_keypairs:
    - key_file: /etc/letsencrypt/live/teleport.example.com/privkey.pem
      cert_file: /etc/letsencrypt/live/teleport.example.com/fullchain.pem
    - key_file: /etc/letsencrypt/live/*.teleport.example.com/privkey.pem
      cert_file: /etc/letsencrypt/live/*.teleport.example.com/fullchain.pem
```
You can learn more at https://goteleport.com/teleport/docs/application-access/
### Teleport Kubernetes Access
Teleport 5.0 also introduces two highly requested features for Kubernetes.
- The ability to connect multiple Kubernetes clusters to the Teleport Access Plane, greatly reducing operational complexity.
- Complete Kubernetes audit log capture #4526, going beyond the existing `kubectl exec` capture.
For a full overview please review the Kubernetes RFD.
To support these changes, we've introduced a new service. This moves Teleport Kubernetes configuration from the `proxy_service` into its own dedicated `kubernetes_service` section.
When adding the new Kubernetes service, a new type of join token is required.
```bash
$ tctl tokens add --type=kube
```
Example configuration for the new `kubernetes_service`:
```yaml
# ...
kubernetes_service:
  enabled: yes
  listen_addr: 0.0.0.0:3027
  kubeconfig_file: /secrets/kubeconfig
```
Note: a Kubernetes port still needs to be configured in the `proxy_service` via `kube_listen_addr`.
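As a minimal sketch, the corresponding proxy-side setting looks like this (port 3026 is Teleport's conventional Kubernetes proxy port; adjust it to your environment):

```yaml
proxy_service:
  # Port on which the proxy accepts Kubernetes traffic
  # before routing it to the kubernetes_service.
  kube_listen_addr: 0.0.0.0:3026
```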
#### New "tsh kube" commands
`tsh kube` commands are used to query registered clusters and switch the `kubeconfig` context:
```bash
$ tsh login --proxy=proxy.example.com --user=awly

# list all registered clusters
$ tsh kube ls
Cluster Name       Status
-----------------  ------
a.k8s.example.com  online
b.k8s.example.com  online
c.k8s.example.com  online

# on login, kubeconfig is pointed at the first cluster (alphabetically)
$ kubectl config current-context
proxy.example.com-a.k8s.example.com

# but all clusters are populated as contexts
$ kubectl config get-contexts
CURRENT   NAME                                  CLUSTER             AUTHINFO
*         proxy.example.com-a.k8s.example.com   proxy.example.com   proxy.example.com-a.k8s.example.com
          proxy.example.com-b.k8s.example.com   proxy.example.com   proxy.example.com-b.k8s.example.com
          proxy.example.com-c.k8s.example.com   proxy.example.com   proxy.example.com-c.k8s.example.com

# switch between different clusters:
$ tsh kube login c.k8s.example.com

# the traditional way is also supported:
$ kubectl config use-context proxy.example.com-c.k8s.example.com

# check current cluster
$ kubectl config current-context
proxy.example.com-c.k8s.example.com
```
Other Kubernetes changes:
- Support k8s clusters behind firewall/NAT using a single Teleport cluster #3667
- Support multiple k8s clusters with a single Teleport proxy instance #3952
### Additional User and Token Resources
We've added two new RBAC resources; these provide the ability to limit token creation and to list and modify Teleport users:
```yaml
- resources: [user]
  verbs: [list, create, read, update, delete]
- resources: [token]
  verbs: [list, create, read, update, delete]
```
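For context, these rules sit under a role's `allow` (or `deny`) rules. A hedged sketch of a complete role resource follows; the role name is illustrative:

```yaml
kind: role
version: v3
metadata:
  # Illustrative name, not a built-in role.
  name: user-and-token-admin
spec:
  allow:
    rules:
      - resources: [user]
        verbs: [list, create, read, update, delete]
      - resources: [token]
        verbs: [list, create, read, update, delete]
```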
Learn more about Teleport's RBAC resources.
### Cluster Labels
Teleport 5.0 also adds the ability to set labels on Trusted Clusters. The labels are set when creating a trusted cluster invite token. This lets teams use the same RBAC controls used on nodes to approve or deny access to clusters. This can be especially useful for MSPs that connect hundreds of customers' clusters - when combined with Access Workflows, cluster access can easily be delegated. Learn more by reviewing our Trusted Cluster Setup & RBAC docs.
Creating a trusted cluster join token for a production environment:
```bash
$ tctl tokens add --type=trusted_cluster --labels=env=prod
```

```yaml
kind: role
#...
deny:
  # Cluster labels control which clusters a user can connect to. The wildcard ('*')
  # means any cluster. By default, deny rules are empty to preserve backwards
  # compatibility.
  cluster_labels:
    'env': 'prod'
```
### Teleport UI Updates
Teleport 5.0 also iterates on the UI Refresh from 4.3. We've moved the cluster list into our sidebar and have added an Application launcher. For customers moving from 4.4 to 5.0, you'll notice that we have moved session recordings back to their own dedicated section.
Other updates:
- We now provide local user management via `https://[cluster-url]/web/users`, providing the ability to easily edit, reset, and delete local users.
- Teleport Node & App install scripts. This is currently an Enterprise-only feature that provides customers with an easy 'auto-magic' installer script. Enterprise customers can enable this feature by modifying the 'token' resource. See note above.
- We've added a Waiting Room for customers using Access Workflows. Docs
### Signed RPM and Releases
Starting with Teleport 5.0, we now provide an RPM repo for stable releases of Teleport. We've also started signing our RPMs to provide assurance that you're always using an official build of Teleport.
See https://rpm.releases.teleport.dev/ for more details.
## Improvements
- Added a `--format=json` playback option for `tsh play`. For example, `tsh play --format=json ~/play/0c0b81ed-91a9-4a2a-8d7c-7495891a6ca0.tar | jq '.event'` can be used to show all events within a local archive. #4578
- Added support for continuous backups and auto scaling for DynamoDB. #4780
- Added a Linux ARM64/ARMv8 (64-bit) Release. #3383
- Added the `https_keypairs` field, which replaces `https_key_file` and `https_cert_file`. This allows administrators to load multiple HTTPS certs for Teleport Application Access. Teleport 5.0 is backwards compatible with the old format, but we recommend updating your configuration to use `https_keypairs`.
Enterprise only:

- `tctl` can load credentials from `~/.tsh` #4678
- Teams can require a user-submitted reason when using Access Workflows #4573
## Fixes
- Updated `tctl` to always format resources as lists in JSON/YAML. #4281
- Updated `tsh status` to now print Kubernetes status. #4348
- Fixed intermittent issues with `loginuid.so`. #3245
- Reduced `access denied to Proxy` log spam. #2920
- Various AMI fixes: paths are now consistent with other Teleport packages, and configuration files will not be overwritten on reboot.
## Documentation
We've added an API Reference to simplify developing applications against Teleport.
## Upgrade Notes
Please follow our standard upgrade procedure.
- Optional: Consider updating `https_key_file` & `https_cert_file` to the new `https_keypairs:` format.
- Optional: Consider migrating Kubernetes Access from `proxy_service` to `kubernetes_service` after the upgrade.
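As a rough sketch of that migration (the pre-5.0 `kubernetes` stanza shown here is illustrative; adapt paths and ports to your deployment):

```yaml
# Before (pre-5.0): Kubernetes support configured under proxy_service.
proxy_service:
  kubernetes:
    enabled: yes
    listen_addr: 0.0.0.0:3026
    kubeconfig_file: /secrets/kubeconfig
```

```yaml
# After (5.0): a dedicated kubernetes_service; the proxy keeps only
# the Kubernetes listening port.
proxy_service:
  kube_listen_addr: 0.0.0.0:3026
kubernetes_service:
  enabled: yes
  listen_addr: 0.0.0.0:3027
  kubeconfig_file: /secrets/kubeconfig
```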