Teleport 6.2 Test Plan #6651

Closed · russjones opened this issue Apr 29, 2021 · 22 comments
Labels: test-plan (A list of tasks required to ship a successful product release.)

russjones commented Apr 29, 2021

Manual Testing Plan

Below are the items that should be manually tested with each release of Teleport.
These tests should be run on both a fresh install of the version to be released
as well as an upgrade of the previous version of Teleport.

  • Adding nodes to a cluster @webvictim @tcsc

    • Adding Nodes via Valid Static Token
    • Adding Nodes via Valid Short-lived Tokens
    • Adding Nodes via Invalid Token Fails
    • Adding Nodes via Expired Token Fails
    • Adding Nodes with No Token Fails
    • Adding Nodes with Invalid Roles Fails
    • Revoking Node Invitation
  • Trusted Clusters @nklaassen @awly

    • Adding Trusted Cluster Valid Static Token
    • Adding Trusted Cluster Valid Short-lived Token
    • Adding Trusted Cluster Invalid Token
    • Removing Trusted Cluster
  • RBAC @Joerger @andrejtokarcik

    Make sure that invalid and valid attempts are reflected in the audit log.

    • Successfully connect to node with correct role
    • Unsuccessfully connect to a node with a role restricting access by label
    • Unsuccessfully connect to a node with a role restricting access by invalid SSH login
    • Allow/deny role option: SSH agent forwarding
    • Allow/deny role option: Port forwarding
  • Users @fspmarshall @quinqu
    With every user combination, try to log in and sign up with an invalid second factor and an invalid password to see how the system reacts.

    • Adding Users Password Only
    • Adding Users OTP
    • Adding Users U2F
    • Managing MFA devices
      • Add an OTP device with tsh mfa add
      • Add a U2F device with tsh mfa add
      • List MFA devices with tsh mfa ls
      • Remove an OTP device with tsh mfa rm
      • Remove a U2F device with tsh mfa rm
      • Attempt removing the last MFA device on the user
        • with second_factor: on in auth_service, should fail
        • with second_factor: optional in auth_service, should succeed
    • Login Password Only
    • Login with MFA
      • Add 2 OTP and 2 U2F devices with tsh mfa add
      • Login via OTP
      • Login via U2F
    • Login OIDC
    • Login SAML
    • Login GitHub
    • Deleting Users
  • Audit Log @r0mant @xacrimon

    • Failed login attempts are recorded
    • Interactive sessions have the correct Server ID
      • Server ID is the ID of the node in regular mode
      • Server ID is randomly generated for proxy node
    • Exec commands are recorded
    • scp commands are recorded
    • Subsystem results are recorded
  • Interact with a cluster using tsh @webvictim @tcsc

    These commands should ideally be tested for recording and non-recording modes as they are implemented in different ways.

    • tsh ssh <regular-node>
    • tsh ssh <node-remote-cluster>
    • tsh ssh -A <regular-node>
    • tsh ssh -A <node-remote-cluster>
    • tsh ssh <regular-node> ls
    • tsh ssh <node-remote-cluster> ls
    • tsh join <regular-node>
    • tsh join <node-remote-cluster>
    • tsh play <regular-node>
    • tsh play <node-remote-cluster>
    • tsh scp <regular-node>
    • tsh scp <node-remote-cluster>
    • tsh ssh -L <regular-node>
    • tsh ssh -L <node-remote-cluster>
    • tsh ls
    • tsh clusters
  • Interact with a cluster using ssh @nklaassen @awly
    Make sure to test both recording and regular proxy modes.

    • ssh <regular-node>
    • ssh <node-remote-cluster>
    • ssh -A <regular-node>
    • ssh -A <node-remote-cluster>
    • ssh <regular-node> ls
    • ssh <node-remote-cluster> ls
    • scp <regular-node>
    • scp <node-remote-cluster>
    • ssh -L <regular-node>
    • ssh -L <node-remote-cluster>
  • Interact with a cluster using the Web UI @Joerger @andrejtokarcik

    • Connect to a Teleport node
    • Connect to an OpenSSH node
    • Check agent forwarding is correct based on role and proxy mode.
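
The node-join checks above can be driven with token commands along these lines (a sketch; confirm flags against tctl tokens --help, and the token value is a placeholder):

```shell
# issue a short-lived node join token (also test static and expired tokens)
tctl tokens add --type=node --ttl=5m

# join a node with the token; an invalid, expired, or missing token should fail
teleport start --roles=node \
  --token=<token> \
  --auth-server=auth.example.com:3025

# revoke a node invitation before it is used
tctl tokens ls
tctl tokens rm <token>
```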

Combinations @fspmarshall @quinqu

For some manual testing, many combinations need to be tested. For example, for
interactive sessions the 12 combinations are below.

  • Connect to an OpenSSH node in a local cluster using OpenSSH.
  • Connect to an OpenSSH node in a local cluster using Teleport.
  • Connect to an OpenSSH node in a local cluster using the Web UI.
  • Connect to a Teleport node in a local cluster using OpenSSH.
  • Connect to a Teleport node in a local cluster using Teleport.
  • Connect to a Teleport node in a local cluster using the Web UI.
  • Connect to an OpenSSH node in a remote cluster using OpenSSH.
  • Connect to an OpenSSH node in a remote cluster using Teleport.
  • Connect to an OpenSSH node in a remote cluster using the Web UI.
  • Connect to a Teleport node in a remote cluster using OpenSSH.
  • Connect to a Teleport node in a remote cluster using Teleport.
  • Connect to a Teleport node in a remote cluster using the Web UI.

Teleport with multiple Kubernetes clusters @xacrimon @webvictim

Note: you can use GKE or EKS or minikube to run Kubernetes clusters.
The only caveat is minikube: it's not publicly reachable, so don't run a proxy there.

  • Deploy combo auth/proxy/kubernetes_service outside of a Kubernetes cluster, using a kubeconfig
    • Login with tsh login, check that tsh kube ls has your cluster
    • Run kubectl get nodes, kubectl exec -it $SOME_POD -- sh
    • Verify that the audit log recorded the above request and session
  • Deploy combo auth/proxy/kubernetes_service inside of a Kubernetes cluster
    • Login with tsh login, check that tsh kube ls has your cluster
    • Run kubectl get nodes, kubectl exec -it $SOME_POD -- sh
    • Verify that the audit log recorded the above request and session
  • Deploy combo auth/proxy_service outside of the Kubernetes cluster and kubernetes_service inside of a Kubernetes cluster, connected over a reverse tunnel
    • Login with tsh login, check that tsh kube ls has your cluster
    • Run kubectl get nodes, kubectl exec -it $SOME_POD -- sh
    • Verify that the audit log recorded the above request and session
  • Deploy a second kubernetes_service inside of another Kubernetes cluster, connected over a reverse tunnel
    • Login with tsh login, check that tsh kube ls has both clusters
    • Switch to a second cluster using tsh kube login
    • Run kubectl get nodes, kubectl exec -it $SOME_POD -- sh on the new cluster
    • Verify that the audit log recorded the above request and session
  • Deploy combo auth/proxy/kubernetes_service outside of a Kubernetes cluster, using a kubeconfig with multiple clusters in it
    • Login with tsh login, check that tsh kube ls has all clusters
  • Test Kubernetes screen in the web UI (tab is located on left side nav on dashboard):
    • Verify that all registered kubes are shown with correct name and labels
    • Verify that clicking on a row's connect button renders a dialogue with manual instructions, with the Step 2 login value matching the row's name column
    • Verify searching for name or labels in the search bar works
    • Verify you can sort by the name column
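
The login/list/exec steps repeated in each scenario above can be sketched as follows (proxy address, user, cluster, and pod names are placeholders):

```shell
# authenticate to the root cluster and list registered Kubernetes clusters
tsh login --proxy=proxy.example.com --user=<username>
tsh kube ls

# switch kubectl to one of the clusters and run the checks
tsh kube login <kube-cluster>
kubectl get nodes
kubectl exec -it <some-pod> -- sh
```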

Helm charts

  • Deploy teleport-cluster Helm chart to an EKS cluster in HA mode by following the AWS guide
    • Verify that web UI works with no TLS warnings and you can create a user with tctl users add
    • Log in with tsh login
    • Display Kubernetes clusters with tsh kube ls, log in with tsh kube login
    • Run kubectl get nodes and kubectl -n kube-system get pods
  • Deploy teleport-cluster Helm chart to a GKE cluster in HA mode by following the GKE guide
    • Verify that web UI works with no TLS warnings and you can create a user with tctl users add
    • Log in with tsh login
    • Display Kubernetes clusters with tsh kube ls, log in with tsh kube login
    • Run kubectl get nodes and kubectl -n kube-system get pods
  • Deploy teleport-kube-agent Helm chart to an EKS cluster following instructions in the README
    • Verify that the remote Kubernetes cluster appears in tsh kube ls, log in with tsh kube login
    • Run kubectl get nodes and kubectl get pods, verify no errors
  • Deploy teleport-kube-agent Helm chart to a GKE cluster following instructions in the README
    • Verify that the remote Kubernetes cluster appears in tsh kube ls, log in with tsh kube login
    • Run kubectl get nodes and kubectl get pods, verify no errors
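
A minimal sketch of the chart deployment, assuming the repository URL and values documented in the chart's README (verify both against the AWS/GKE guides before use):

```shell
# add the Teleport Helm repository and install the cluster chart in HA mode
helm repo add teleport https://charts.releases.teleport.dev
helm repo update
helm install teleport-cluster teleport/teleport-cluster \
  --namespace teleport --create-namespace \
  --set chartMode=aws \
  --set clusterName=teleport.example.com
```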

Migrations @tcsc @nklaassen

  • Migrate trusted clusters from 6.1.0 to 6.2.0
    • Migrate auth server on main cluster, then rest of the servers on main cluster
      SSH should work for both main and old clusters
    • Migrate auth server on remote cluster, then rest of the remote cluster
      SSH should work

Command Templates

When interacting with a cluster, the following command templates are useful:

OpenSSH

# when connecting to the recording proxy, `-o 'ForwardAgent yes'` is required.
ssh -o "ProxyCommand ssh -o 'ForwardAgent yes' -p 3023 %r@proxy.example.com -s proxy:%h:%p" \
  node.example.com

# the above command only forwards the agent to the proxy; to forward the agent
# to the target node, `-o 'ForwardAgent yes'` needs to be passed twice.
ssh -o "ForwardAgent yes" \
  -o "ProxyCommand ssh -o 'ForwardAgent yes' -p 3023 %r@proxy.example.com -s proxy:%h:%p" \
  node.example.com

# when connecting to a remote cluster using OpenSSH, the subsystem request is
# updated with the name of the remote cluster.
ssh -o "ProxyCommand ssh -o 'ForwardAgent yes' -p 3023 %r@proxy.example.com -s proxy:%h:%p@foo.com" \
  node.foo.com

Teleport

# when connecting to an OpenSSH node, remember `-p 22` needs to be passed.
tsh --proxy=proxy.example.com --user=<username> --insecure ssh -p 22 node.example.com

# an agent can be forwarded to the target node with `-A`
tsh --proxy=proxy.example.com --user=<username> --insecure ssh -A -p 22 node.example.com

# the --cluster flag is used to connect to a node in a remote cluster.
tsh --proxy=proxy.example.com --user=<username> --insecure ssh --cluster=foo.com -p 22 node.foo.com
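
For the remaining tsh test cases, the following templates may help (session IDs and file paths are placeholders):

```shell
# join or replay a session
tsh --proxy=proxy.example.com join <session-id>
tsh --proxy=proxy.example.com play <session-id>

# copy files, list nodes and clusters
tsh --proxy=proxy.example.com scp example.txt user@node.example.com:/tmp/
tsh --proxy=proxy.example.com ls
tsh --proxy=proxy.example.com clusters
```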

Teleport Plugins @awly @Joerger

  • Test receiving a message via Teleport Slackbot
  • Test receiving a new Jira Ticket via Teleport Jira

WEB UI @kimlisa @alex-kovoy

Main

For main, test with an admin role that has access to all resources.

Top Nav

  • Verify that cluster selector displays all (root + leaf) clusters
  • Verify that user name is displayed
  • Verify that user menu shows logout, help&support, and account settings (for local users)

Side Nav

  • Verify that each item has an icon
  • Verify that Collapse/Expand works: collapsed shows the > icon, expanded shows the v icon
  • Verify that it automatically expands and highlights the item on page refresh

Servers aka Nodes

  • Verify that "Servers" table shows all joined nodes
  • Verify that "Connect" button shows a list of available logins
  • Verify that "Hostname", "Address" and "Labels" columns show the current values
  • Verify that "Search" by hostname, address, labels works
  • Verify that terminal opens when clicking on one of the available logins
  • Verify that clicking on the Add Server button renders a dialogue set to the Automatically view
    • Verify clicking on Regenerate Script regenerates token value in the bash command
    • Verify using the bash command successfully adds the server (refresh server list)
    • Verify that clicking on Manually tab renders manual steps
    • Verify that clicking back to Automatically tab renders bash command

Applications

  • Verify that clicking on the Add Application button renders a dialogue
    • Verify input validation (prevent empty value and invalid url)
    • Verify after input and clicking on Generate Script, bash command is rendered
    • Verify clicking on Regenerate button regenerates token value in bash command

Databases

  • Verify that clicking on the Add Database button renders a dialogue with manual instructions:
    • Verify selecting different options on Step 4 changes Step 5 commands

Active Sessions

  • Verify that "empty" state is handled
  • Verify that it displays the session when session is active
  • Verify that "Description", "Session ID", "Users", "Nodes" and "Duration" columns show correct values
  • Verify that the "OPTIONS" button allows you to join a session

Audit log

  • Verify that time range button is shown and works
  • Verify that clicking on a Session Ended event icon takes the user to the session player
  • Verify the event detail dialogue renders when clicking on an event's details button
  • Verify searching by type, description, and created time works

Users

  • Verify that users are shown
  • Verify that creating a new user works
  • Verify that editing user roles works
  • Verify that removing a user works
  • Verify resetting a user's password works
  • Verify search by username, roles, and type works

Auth Connectors

  • Verify that creating OIDC/SAML/GITHUB connectors works
  • Verify that editing OIDC/SAML/GITHUB connectors works
  • Verify that error is shown when saving an invalid YAML
  • Verify that correct hint text is shown on the right side
  • Verify that encrypted SAML assertions work with an identity provider that supports it (Azure).

Auth Connectors Card Icons

  • Verify that GITHUB card has github icon
  • Verify that SAML card has SAML icon
  • Verify that OIDC card has OIDC icon
  • Verify when there are no connectors, empty state renders

Roles

  • Verify that roles are shown
  • Verify that "Create New Role" dialog works
  • Verify that deleting and editing works
  • Verify that error is shown when saving an invalid YAML
  • Verify that correct hint text is shown on the right side

Managed Clusters

  • Verify that it displays a list of clusters (root + leaf)
  • Verify that every menu item works: nodes, apps, audit events, session recordings.

Help & Support

  • Verify that all URLs work and are correct (no 404s)

Access Requests

Creating Access Requests

  1. Create a role with limited permissions (defined below as allow-roles). This role allows you to see the Role screen and ssh into all nodes.
  2. Create another role with limited permissions (defined below as allow-users). This role's session expires in 4 minutes; it allows you to see the Users screen and denies access to all nodes.
  3. Create another role with no permissions other than being able to create requests (defined below as default)
  4. Create a user with role default assigned
  5. Create a few requests under this user to test pending/approved/denied state.
kind: role
metadata:
  name: allow-roles
spec:
  allow:
    logins:
    - root
    node_labels:
      '*': '*'
    rules:
    - resources:
      - role
      verbs:
      - list
      - read
  options:
    max_session_ttl: 8h0m0s
version: v3
---
kind: role
metadata:
  name: allow-users
spec:
  allow:
    rules:
    - resources:
      - user
      verbs:
      - list
      - read
  deny:
    node_labels:
      '*': '*'
  options:
    max_session_ttl: 4m0s
version: v3
---
kind: role
metadata:
  name: default
spec:
  allow:
    request:
      roles:
      - allow-roles
      - allow-users
      suggested_reviewers:
      - random-user-1
      - random-user-2
  options:
    max_session_ttl: 8h0m0s
version: v3
  • Verify that creating a new request works
  • Verify that under requestable roles, only allow-roles and allow-users are listed
  • Verify input validation requires at least one role to be selected
  • Verify you can select/input/modify reviewers
  • Verify after creating, requests are listed in pending states
  • Verify you can't review own requests
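
The roles and user above can be created with tctl, along these lines (file names are illustrative):

```shell
# save each role definition above to its own file, then:
tctl create -f allow-roles.yaml
tctl create -f allow-users.yaml
tctl create -f default.yaml

# create the test user with only the default role
tctl users add <username> --roles=default
```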

Viewing & Approving/Denying Requests

Create a user with the role reviewer that allows you to review all requests, and delete them.

kind: role
version: v3
metadata:
  name: reviewer
spec:
  allow:
    review_requests:
      roles: ['*']
  • Verify you can view an access request from the request list
  • Verify the list of reviewers you selected is shown (empty if none were selected and none were defined in the role)
  • Verify the threshold name is shown (it will be "default" if thresholds weren't defined in the role, or blank if not named)
  • Verify you can approve a request with a message, and immediately see the updated state with your review stamp (green checkmark) and message box
  • Verify you can deny a request, and immediately see the updated state with your review stamp (red cross)
  • Verify deleting a denied request removes it from the list
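
Requests can also be inspected and reviewed from the CLI to cross-check the UI state (a sketch; confirm subcommands against tctl requests --help):

```shell
# list pending requests, then approve or deny one
tctl requests ls
tctl requests approve <request-id>
tctl requests deny <request-id>

# remove a denied request
tctl requests rm <request-id>
```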

Assuming Approved Requests

  • Verify assume buttons are only present for approved requests belonging to the logged-in user
  • Verify that assuming allow-roles allows you to see the roles screen and ssh into nodes
  • Verify that after clicking on the assume button, it is disabled in both the list and the request view
  • After assuming allow-roles, verify that assuming allow-users allows you to see the users screen, and denies access to nodes
    • Verify a switchback banner is rendered with the roles assumed, and a countdown of when they expire
    • Verify switching back goes back to your default static role
    • Verify after re-assuming this role, the user is automatically logged out after the expiry is met (4 minutes)
  • Verify that after logging out (or getting logged out automatically) and logging back in, permissions are reset to default, and approved, unexpired requests are assumable again

Access Request Waiting Room

Strategy Reason

Create the following role:

kind: role
metadata:
  name: restrict
spec:
  allow:
    request:
      roles:
      - <some other role to assign user after approval>
  options:
    max_session_ttl: 8h0m0s
    request_access: reason
    request_prompt: <some custom prompt to show in reason dialogue>
version: v3
  • Verify after login, reason dialogue is rendered with prompt set to request_prompt setting
  • Verify after clicking send request, pending dialogue renders
  • Verify after approving a request, dashboard is rendered
  • Verify the correct role was assigned

Strategy Always

With the previous role you created from Strategy Reason, change request_access to always:

  • Verify after login, pending dialogue is rendered
  • Verify after approving a request, dashboard is rendered
  • Verify after denying a request, access denied dialogue is rendered

Strategy Optional

With the previous role you created from Strategy Reason, change request_access to optional:

  • Verify after login, dashboard is rendered
  • Verify a switchback banner is rendered with the roles assumed, and a countdown of when they expire
    • Verify switchback button says Switch Back and clicking goes back to the login screen

Account

  • Verify that the Account screen is accessible from the user menu for local users.
  • Verify that changing a local password works (OTP, U2F)

Terminal

  • Verify that top nav has a user menu (Main and Logout)
  • Verify that switching between tabs works on alt+[1...9]

Node List Tab

  • Verify that Cluster selector works (URL should change too)
  • Verify that Quick launcher input works
  • Verify that Quick launcher input handles input errors
  • Verify that "Connect" button shows a list of available logins
  • Verify that "Hostname", "Address" and "Labels" columns show the current values
  • Verify that "Search" by hostname, address, labels work
  • Verify that new tab is created when starting a session

Session Tab

  • Verify that session and browser tabs both show the title with login and node name
  • Verify that terminal resize works
    • Install midnight commander on the node you ssh into: $ sudo apt-get install mc
    • Run the program: $ mc
    • Resize the terminal to see if panels resize with it
  • Verify that session tab shows/updates number of participants when a new user joins the session
  • Verify that tab automatically closes on "$ exit" command
  • Verify that SCP Upload works
  • Verify that SCP Upload handles invalid paths and network errors
  • Verify that SCP Download works
  • Verify that SCP Download handles invalid paths and network errors

Session Player

  • Verify that it can replay a session
  • Verify that when playing, the scroller auto-scrolls to the bottom-most content
  • Verify when resizing the player to a small screen, the scroller appears and is working
  • Verify that an error message is displayed (enter an invalid SID in the URL)

Invite Form

  • Verify that input validates
  • Verify that invite works with 2FA disabled
  • Verify that invite works with OTP enabled
  • Verify that invite works with U2F enabled
  • Verify that error message is shown if an invite is expired/invalid

Login Form

  • Verify that input validates
  • Verify that login works with 2FA disabled
  • Verify that login works with OTP enabled
  • Verify that login works with U2F enabled
  • Verify that login works for Github/SAML/OIDC
  • Verify that account is locked after several unsuccessful attempts
  • Verify that redirect to original URL works after successful login

Multi-factor Authentication (mfa)

Create/modify teleport.yaml and set the following authentication settings under auth_service

authentication:
  type: local
  second_factor: optional
  require_session_mfa: yes
  u2f:
    app_id: https://example.com:443
    facets:
    - https://example.com:443
    - https://example.com
    - example.com:443
    - example.com

MFA create, login, password reset

  • Verify that when creating a user and setting a password, the required 2nd factor is totp (TODO: temporary hack, ideally want to allow user to select)
  • Verify at the login page there is an mfa dropdown menu (none, u2f, otp), and you can log in with otp
  • Verify at the reset password page there is the same dropdown to select your mfa, and you can reset with otp

MFA require auth

Through the CLI, log in with tsh login and register a u2f key with tsh mfa add (not supported in the UI yet).

Using the same user as above:

  • Verify logging in with registered u2f key works
  • Verify connecting to an ssh node prompts you to tap your registered u2f key
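
The CLI device-management flow referenced above, for reference (tsh mfa add prompts interactively for the device type and name):

```shell
# register a new MFA device
tsh mfa add

# list and remove registered devices
tsh mfa ls
tsh mfa rm <device-name>
```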

RBAC

Create a role, with no allow.rules defined:

kind: role
metadata:
  name: test
spec:
  allow:
    app_labels:
      '*': '*'
    logins:
    - root
    node_labels:
      '*': '*'
  options:
    max_session_ttl: 8h0m0s
version: v3
  • Verify that a user has access only to: "Servers", "Applications", "Databases", "Kubernetes", "Active Sessions", "Access Requests" and "Manage Clusters"
  • Verify there is no Add Server button in Server view
  • Verify there is no Add Application button in Applications view
  • Verify only Nodes and Apps are listed under options button in Manage Clusters

Note: User has read/create access_request access to their own requests, despite resource settings

Add the following under spec.allow.rules to enable read access to the audit log:

  - resources:
      - event
      verbs:
      - list
  • Verify that the Audit Log and Session Recordings are accessible
  • Verify that playing a recorded session is denied

Add the following to enable read access to recorded sessions

  - resources:
      - session
      verbs:
      - read
  • Verify that a user can re-play a session (session.end)

Add the following to enable read access to the roles

  - resources:
      - role
      verbs:
      - list
      - read
  • Verify that a user can see the roles
  • Verify that a user cannot reset password and create/delete/update a role

Add the following to enable read access to the auth connectors

  - resources:
      - auth_connector
      verbs:
      - list
      - read
  • Verify that a user can see the list of auth connectors.
  • Verify that a user cannot create/delete/update the connectors

Add the following to enable read access to users

  - resources:
      - user
      verbs:
      - list
      - read
  • Verify that a user can access the "Users" screen
  • Verify that a user cannot create/delete/update a user

Add the following to enable read access to trusted clusters

  - resources:
      - trusted_cluster
      verbs:
      - list
      - read
  • Verify that a user can access the "Trust" screen
  • Verify that a user cannot create/delete/update a trusted cluster.
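
One way to iterate on the test role between the steps above is to export it, edit spec.allow.rules, and re-apply (a sketch; tctl create -f overwrites the existing resource):

```shell
# export the current role, edit it, and push it back
tctl get roles/test > test-role.yaml
# (add the next rules block to test-role.yaml)
tctl create -f test-role.yaml
```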

Performance/Soak Test @xacrimon @fspmarshall

Using the tsh bench tool, perform the soak tests and benchmark tests on the following configurations:

  • Cluster with 10K nodes in normal (non-IOT) node mode with ETCD

  • Cluster with 10K nodes in normal (non-IOT) mode with DynamoDB

  • Cluster with 1K IOT nodes with ETCD

  • Cluster with 1K IOT nodes with DynamoDB

  • Cluster with 500 trusted clusters with ETCD

  • Cluster with 500 trusted clusters with DynamoDB

Soak Tests

Run a 4-hour soak test with a mix of interactive/non-interactive sessions:

tsh bench --duration=4h user@teleport-monster-6757d7b487-x226b ls
tsh bench -i --duration=4h user@teleport-monster-6757d7b487-x226b ps uax

Observe Prometheus metrics for goroutines, open files, RAM, CPU, and timers, and make sure there are no leaks.

  • Verify that Prometheus metrics are accurate.

Breaking load tests

Load the system with tsh bench to capacity and publish the maximum number of concurrent sessions with interactive
and non-interactive tsh bench loads.
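
A possible ramp-up, assuming tsh bench's --rate flag (increase the rate until errors appear and record the last sustainable value; user and host are placeholders):

```shell
# non-interactive and interactive loads at increasing request rates
tsh bench --duration=30m --rate=100 <user>@<node> ls
tsh bench -i --duration=30m --rate=100 <user>@<node> ps uax
```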

Application Access @r0mant @smallinsky

  • Run an application within local cluster.
    • Verify the debug application (debug_app: true) works.
    • Verify an application can be configured with command line flags.
    • Verify an application can be configured from file configuration.
    • Verify that applications are available at auto-generated addresses name.rootProxyPublicAddr as well as at publicAddr.
  • Run an application within a trusted cluster.
    • Verify that applications are available at auto-generated addresses name.rootProxyPublicAddr.
  • Verify Audit Records.
    • app.session.start and app.session.chunk events are created in the Audit Log.
    • app.session.chunk points to a 5 minute session archive with multiple app.session.request events inside.
    • tsh play <chunk-id> can fetch and print a session chunk archive.
  • Verify JWT using verify-jwt.go.
  • Verify RBAC.
  • Verify CLI access with tsh app login.
  • Test Applications screen in the web UI (tab is located on left side nav on dashboard):
    • Verify that all apps registered are shown
    • Verify that clicking on the app icon takes you to another tab
    • Verify using the bash command produced from the Add Application dialogue works (refresh app screen to see it registered)
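
For the file-configuration case, a minimal app_service block might look like this (app name, URI, and public address are illustrative):

```yaml
app_service:
  enabled: yes
  # serves a built-in app useful for debugging
  debug_app: true
  apps:
  - name: grafana
    uri: http://localhost:3000
    public_addr: grafana.example.com
```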

Database Access @r0mant @smallinsky

  • Connect to a database within a local cluster.
    • Self-hosted Postgres.
    • Self-hosted MySQL.
    • AWS Aurora Postgres.
    • AWS Aurora MySQL.
    • AWS Redshift.
    • GCP Cloud SQL Postgres.
  • Connect to a database within a remote cluster via a trusted cluster.
    • Self-hosted Postgres.
    • Self-hosted MySQL.
    • AWS Aurora Postgres.
    • AWS Aurora MySQL.
    • AWS Redshift.
    • GCP Cloud SQL Postgres.
  • Verify audit events.
    • db.session.start is emitted when you connect.
    • db.session.end is emitted when you disconnect.
    • db.session.query is emitted when you execute a SQL query.
  • Verify RBAC.
    • tsh db ls shows only databases matching role's db_labels.
    • Can only connect as users from db_users.
    • (Postgres only) Can only connect to databases from db_names.
    • db.session.start is emitted when connection attempt is denied.
  • Test Databases screen in the web UI (tab is located on left side nav on dashboard):
    • Verify that all registered dbs are shown with correct name, description, type, and labels
    • Verify that clicking on a row's connect button renders a dialogue with manual instructions, with the Step 2 login value matching the row's name column
    • Verify searching for all columns in the search bar works
    • Verify you can sort by all columns except labels
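
For the self-hosted cases, a minimal db_service block might look like this (database names and URIs are illustrative):

```yaml
db_service:
  enabled: yes
  databases:
  - name: postgres
    protocol: postgres
    uri: localhost:5432
  - name: mysql
    protocol: mysql
    uri: localhost:3306
```
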
@russjones russjones added the test-plan A list of tasks required to ship a successful product release. label Apr 29, 2021
@russjones russjones added this to the 6.2 "Buffalo" milestone Apr 29, 2021
@awly awly mentioned this issue May 4, 2021

quinqu commented May 14, 2021

When adding an OTP device with tsh mfa add and trying to enter the code, Teleport says the code must be 6 digits long, which my input surely is. It still won't be accepted. Terminal output:

Choose device type [TOTP, U2F]: TOTP
Enter device name: tempdevice
Enter an OTP code from a *registered* device: 628304


Open your TOTP app and create a new manual entry with these fields:
  URL: <omitted> 
  Account name: <omitted>
  Secret key: <omitted>
  Issuer: <omitted> 
  Algorithm: SHA1
  Number of digits: 6
  Period: 30s

Once created, enter an OTP code generated by the app: 624072
TOTP code must be exactly 6 digits long, try again
Once created, enter an OTP code generated by the app: 624072
TOTP code must be exactly 6 digits long, try again
Once created, enter an OTP code generated by the app: 910046
TOTP code must be exactly 6 digits long, try again
Once created, enter an OTP code generated by the app: 426970
TOTP code must be exactly 6 digits long, try again
Once created, enter an OTP code generated by the app:


awly commented May 17, 2021

@quinqu could you please file a bug for this and assign to me?
It's likely I introduced the problem in 6.2


Joerger commented May 19, 2021

Updating a user with tctl create -f user.yaml breaks the audit log and session recordings tabs in the Web UI - #6935


tcsc commented May 19, 2021

@webvictim - I've added a test matrix for the tsh tests here so we don't stomp on each other. Or on ourselves. Feel free to edit as necessary.

New New (No Rec) Upgraded Upgraded (No Rec)
PASS PASS PASS PASS tsh ssh <regular-node>
PASS PASS PASS PASS tsh ssh <node-remote-cluster>
PASS PASS PASS PASS tsh ssh -A <regular-node>
PASS PASS PASS PASS tsh ssh -A <node-remote-cluster>
PASS PASS PASS PASS tsh ssh <regular-node> ls
PASS PASS PASS PASS tsh ssh <node-remote-cluster> ls
PASS PASS PASS PASS tsh join <regular-node>
PASS PASS PASS PASS tsh join <node-remote-cluster>
PASS *PASS PASS *PASS tsh play <regular-node>
PASS *PASS PASS *PASS tsh play <node-remote-cluster>
PASS PASS PASS PASS tsh scp <regular-node>
PASS PASS PASS PASS tsh scp <node-remote-cluster>
PASS PASS PASS PASS tsh ssh -L <regular-node>
PASS PASS PASS PASS tsh ssh -L <node-remote-cluster>
PASS PASS PASS PASS tsh ls
PASS PASS PASS PASS tsh clusters
  * = failed with ERROR: 0 not found, which I assume is the correct behaviour when recording is disabled


tcsc commented May 19, 2021

Encountered #6938 while testing: Panic when using tctl with remote auth server


kimlisa commented May 19, 2021

mfa related bug, where scp upload/download does not work in the web ui: #6939


r0mant commented May 19, 2021

@Joerger @xacrimon Seeing #6935 as well which Brian reported above.

(screenshot attached)

@xacrimon Looks like this file (dynamic.go) was a part of your RFD19 implementation, could this have caused it? Just need to add user.updated event to the switch probably.

@xacrimon

@r0mant Resolved in #6949 and #6950 backport to v6.

@fspmarshall

Changes introduced in #6731 break compatibility with older 6.X instances due to reliance on new GRPC methods (e.g. attempting to view audit events from the UI of a 6.2 proxy results in an unknown method GetEvents for service proto.AuthService error when dealing with a 6.1 auth server).

Teleport should fall back to using the old event API if the new one is not available.

cc: @xacrimon @kimlisa

@xacrimon commented May 19, 2021

@fspmarshall So this is a bit of an issue: the old events API does not support pagination, but the `IAuditLog` interface expects it. Should we just ignore the new parameters introduced in RFD 19 and pretend pagination doesn't exist on fallback?

@kimlisa commented May 20, 2021

UI switchback bug (I am fixing): #6960
@xacrimon related to #6935, unknown event bug: #6959

@fspmarshall

> Should we just ignore the new parameters introduced in RFD 19 and pretend pagination doesn't exist on fallback?

@xacrimon Followed up in the PR. Basically, I think we should pretend it doesn't exist when dealing with the first call (since that means we're getting the "first page", which is what the old API did), but we should return an error if `startKey != ""`, since that means we're loading a subsequent page, which the old API can't do.
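That rule can be sketched as follows (hypothetical names; the real change lives in the events-API fallback code):

```go
package main

import (
	"errors"
	"fmt"
)

// legacySearch stands in for the old, unpaginated events API, which always
// returns everything it has in a single response (hypothetical name).
func legacySearch() ([]string, error) {
	return []string{"user.login", "session.start"}, nil
}

// searchWithFallback applies the rule above when talking to an old auth
// server: an empty startKey means "first page", which the old API can serve;
// any non-empty startKey asks for a subsequent page, which it cannot.
func searchWithFallback(startKey string) (events []string, nextKey string, err error) {
	if startKey != "" {
		return nil, "", errors.New("pagination is not supported by pre-6.2 auth servers")
	}
	events, err = legacySearch()
	return events, "", err // empty nextKey: there is no second page
}

func main() {
	events, _, _ := searchWithFallback("")
	fmt.Println(events) // [user.login session.start]
	_, _, err := searchWithFallback("page-2-key")
	fmt.Println(err != nil) // true
}
```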

@awly commented May 20, 2021

@xacrimon @webvictim @fspmarshall @quinqu let me know if you're overloaded.
Some other folks are done with their testing, so I can redistribute the remaining tasks if needed.

@quinqu commented May 20, 2021

@awly I could use some help with the U2F second-factor tests, as I do not have a U2F device.

@awly commented May 20, 2021

@quinqu will do 👍

@awly commented May 20, 2021

FYI everyone: if you find an issue while testing, please file a bug and put it into the 6.2 milestone.
That way I can track all the remaining work and questions.

@xacrimon commented May 21, 2021

I had previously assumed the DynamoDB tests were running, but they have not been. I still need to hook them up and run them before I can say everything is correct. I will make another comment, but please do not cut the release before I confirm that everything is indeed working @awly. @russjones I've also merged the API compat PR; #6990 will need to be merged as well, and I will ping for reviews when it is ready.

@webvictim

Ran into some weird `tsh logout` behaviour, detailed in #6992

Not sure if this is a blocker but I can't log out of all my clusters for some reason.

@xacrimon commented May 21, 2021

Okay, I have pinged for reviews on #6990, and I sign off on everything working once it is merged. I've manually done some testing to make sure it works.

@webvictim

Most Kubernetes tests are finished, just waiting on #6990 merge/backport (and rc.2 cut?) to verify the audit log entries:

*(screenshot: Screenshot 2021-05-21 at 15 15 51)*

@awly commented May 25, 2021

All issues are either resolved or not caused by 6.2.
Marking the test plan as done.

awly closed this as completed on May 25, 2021
@russjones

From @fspmarshall

**6.2 - etcd - IoT**

```
tsh bench --duration=30m root@loadtest-665c98bfb5-72w58 ls
* Requests originated: 17920
* Requests failed: 258
* Last error: connection closed

Histogram
Percentile Response Duration
---------- -----------------
25         4867 ms
50         6943 ms
75         9583 ms
90         14951 ms
95         20959 ms
99         40799 ms
100        65439 ms
```

```
tsh bench --interactive --duration=30m root@loadtest-665c98bfb5-9wk2b ps aux
* Requests originated: 17905
* Requests failed: 253
* Last error: connection error: desc = "transport: authentication handshake failed: EOF"

Histogram
Percentile Response Duration
---------- -----------------
25         4923 ms
50         7079 ms
75         9727 ms
90         15015 ms
95         20783 ms
99         41951 ms
100        64927 ms
```

**6.2 - etcd - non-IoT**

```
tsh bench --duration=30m root@loadtest-665c98bfb5-qcf82 ls
* Requests originated: 17983
* Requests failed: 23
* Last error: connection error: desc = "transport: authentication handshake failed: EOF"

Histogram
Percentile Response Duration
---------- -----------------
25         4719 ms
50         6567 ms
75         8703 ms
90         11143 ms
95         13439 ms
99         21263 ms
100        49183 ms
```

```
tsh bench --interactive --duration=30m root@loadtest-665c98bfb5-zfsrb ps aux
* Requests originated: 17970
* Requests failed: 17
* Last error: connection error: desc = "transport: authentication handshake failed: EOF"

Histogram
Percentile Response Duration
---------- -----------------
25         4655 ms
50         6391 ms
75         8327 ms
90         10703 ms
95         13079 ms
99         21759 ms
100        59423 ms
```
